[jira] [Commented] (HIVE-4280) TestRetryingHMSHandler is failing on trunk.

2013-04-04 Thread Teddy Choi (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623416#comment-13623416
 ] 

Teddy Choi commented on HIVE-4280:
--

I ran the unit tests several times, but they failed. It seems there is a cause 
of failures specific to my machine, such as a configuration change or a problem 
in the base version.

So I merged the recent trunk and started a new test on another machine. I will 
report again when it finishes.

> TestRetryingHMSHandler is failing on trunk.
> ---
>
> Key: HIVE-4280
> URL: https://issues.apache.org/jira/browse/HIVE-4280
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0
>Reporter: Ashutosh Chauhan
>Assignee: Teddy Choi
>
> Newly added testcase TestRetryingHMSHandler fails on trunk. 
> https://builds.apache.org/job/Hive-trunk-h0.21/2040/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4241) optimize hive.enforce.sorting and hive.enforce.bucketing join

2013-04-04 Thread Namit Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namit Jain updated HIVE-4241:
-

Attachment: hive.4241.2.patch-nohcat

> optimize hive.enforce.sorting and hive.enforce.bucketing join
> -
>
> Key: HIVE-4241
> URL: https://issues.apache.org/jira/browse/HIVE-4241
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4241.1.patch, hive.4241.1.patch-nohcat, 
> hive.4241.2.patch-nohcat
>
>
> Consider the following scenario:
> T1: sorted and bucketed by key into 2 buckets
> T2: sorted and bucketed by key into 2 buckets
> T3: sorted and bucketed by key into 2 buckets
> set hive.enforce.sorting=true;
> set hive.enforce.bucketing=true;
> insert overwrite table T3
> select .. from T1 join T2 on T1.key = T2.key;
> Since T1, T2 and T3 are sorted/bucketed by the join key, and the above join is
> being performed as a sort-merge join, T3 should be bucketed/sorted without
> the need for an extra reducer.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-hadoop2 - Build # 140 - Still Failing

2013-04-04 Thread Apache Jenkins Server
Changes for Build #138
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)

[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce.sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files to .gitignore (Roshan Naik 
via Navis)

[namit] HIVE-4272 partition wise metadata does not work for text files

[hashutosh] HIVE-896 : Add LEAD/LAG/FIRST/LAST analytical windowing functions 
to Hive. (Harish Butani via Ashutosh Chauhan)

[namit] HIVE-4260 union_remove_12, union_remove_13 are failing on hadoop2
(Gunther Hagleitner via namit)

[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)

[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)

[hashutosh] HIVE-4122 : Queries fail if timestamp data not in expected format 
(Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan)

[gates] HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog

[hashutosh] HIVE-4263 : Adjust build.xml package command to move all hcat jars 
and binaries into build (Alan Gates via Ashutosh Chauhan)

[namit] HIVE-4258 Log logical plan tree for debugging
(Navis via namit)

[navis] HIVE-2264 Hive server is SHUTTING DOWN when invalid queries being 
executed

[kevinwilfong] HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to 
check if table exists. (Gang Tim Liu via kevinwilfong)

[gangtimliu] HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via 
Gang Tim Liu)

[gangtimliu] HIVE-4155: Expose ORC's FileDump as a service

[gangtimliu] HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin 
Wilfong via Gang Tim Liu)

[namit] HIVE-4149 wrong results big outer joins with array of ints
(Navis via namit)

[namit] HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit)

[gates] Removing old branches to limit size of Hive downloads.

[gates] Removing tags directory as we no longer need them and they're in the 
history.

[gates] Moving HCatalog into Hive.

[gates] Test that perms work for hcatalog

[hashutosh] HIVE-4007 : Create abstract classes for serializer and deserializer 
(Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-4042 : ignore mapjoin hint (Namit Jain via Ashutosh Chauhan)

[namit] HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit)

[namit] HIVE-4212 sort merge join should work for outer joins for more than 8 
inputs
(Namit via Gang Tim Liu)

[namit] HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4092. Store complete names of tables in column access 
analyzer (Samuel Yuan via kevinwilfong)

[namit] HIVE-4208 Clientpositive test parenthesis_star_by is non-deterministic
(Mark Grover via namit)

[cws] HIVE-4217. Fix show_create_table_*.q test failures (Carl Steinbach via 
cws)

[namit] HIVE-4206 Sort merge join does not work for outer joins for 7 inputs
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4188. TestJdbcDriver2.testDescribeTable failing 
consistently. (Prasad Mujumdar via kevinwilfong)

[hashutosh] HIVE-3820 Consider creating a literal like D or BD for representing 
Decimal type constants (Gunther Hagleitner v

[jira] [Resolved] (HIVE-4297) LvJ operator does not have colExprMap for columns from UDTF

2013-04-04 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis resolved HIVE-4297.
-

Resolution: Duplicate

The modification was minimal, so merging it into HIVE-4293.

> LvJ operator does not have colExprMap for columns from UDTF
> ---
>
> Key: HIVE-4297
> URL: https://issues.apache.org/jira/browse/HIVE-4297
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
>
> The mapping information is needed for HIVE-4293

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3992) Hive RCFile::sync(long) does a sub-sequence linear search for sync blocks

2013-04-04 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623331#comment-13623331
 ] 

Gopal V commented on HIVE-3992:
---

Combined with Ashutosh's comment on using Guava, it makes sense to use Guava's 
cache implementations instead of implementing my own from scratch.

I will update the patch today.

> Hive RCFile::sync(long) does a sub-sequence linear search for sync blocks
> -
>
> Key: HIVE-3992
> URL: https://issues.apache.org/jira/browse/HIVE-3992
> Project: Hive
>  Issue Type: Bug
> Environment: Ubuntu x86_64/java-1.6/hadoop-2.0.3
>Reporter: Gopal V
>Assignee: Gopal V
> Attachments: HIVE-3992.2.patch, HIVE-3992.patch, 
> select-join-limit.html
>
>
> The following function does some bad I/O
> {code}
> public synchronized void sync(long position) throws IOException {
>   ...
>   try {
> seek(position + 4); // skip escape
> in.readFully(syncCheck);
> int syncLen = sync.length;
> for (int i = 0; in.getPos() < end; i++) {
>   int j = 0;
>   for (; j < syncLen; j++) {
> if (sync[j] != syncCheck[(i + j) % syncLen]) {
>   break;
> }
>   }
>   if (j == syncLen) {
> in.seek(in.getPos() - SYNC_SIZE); // position before
> // sync
> return;
>   }
>   syncCheck[i % syncLen] = in.readByte();
> }
>   }
> ...
> }
> {code}
> This causes a rather large number of readByte() calls, each passed on to a 
> ByteBuffer via a single-byte array.
> This results in a rather large amount of CPU being burnt in the linear 
> search for the sync pattern in the input RCFile (up to 92% for a skewed 
> example - a trivial map-join + limit 100).
> This behaviour should be avoided if possible, or at least replaced by a 
> rolling hash for efficient comparison, since the sync marker has a known 
> byte-width of 16 bytes.
> Attached the stack trace from a Yourkit profile.
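
As a side note, a minimal sketch of the rolling-hash scan suggested above: keep an
additive hash of the last 16 bytes and only fall back to a byte-by-byte check when
the hash equals the sync marker's hash. The class name, the plain DataInputStream
reading, and the additive hash are illustrative assumptions, not the actual RCFile
change; a production version would also read in large buffered chunks.
{code}
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

/** Illustrative only: locate a 16-byte sync marker with a rolling hash. */
public class RollingSyncScanner {
  private static final int SYNC_SIZE = 16;

  /** Returns the stream offset of the first occurrence of sync, or -1. */
  static long findSync(DataInputStream in, byte[] sync) throws IOException {
    long target = 0;
    for (byte b : sync) {
      target += (b & 0xff);                  // hash of the marker itself
    }
    byte[] window = new byte[SYNC_SIZE];     // circular buffer of the last 16 bytes
    long rolling = 0, pos = 0;
    int filled = 0, b;
    while ((b = in.read()) != -1) {
      int idx = (int) (pos % SYNC_SIZE);
      if (filled == SYNC_SIZE) {
        rolling -= (window[idx] & 0xff);     // drop the byte leaving the window
      } else {
        filled++;
      }
      window[idx] = (byte) b;
      rolling += (b & 0xff);
      pos++;
      if (filled == SYNC_SIZE && rolling == target
          && matches(window, sync, (int) (pos % SYNC_SIZE))) {
        return pos - SYNC_SIZE;              // offset where the marker starts
      }
    }
    return -1;
  }

  /** Byte-wise verification of the circular window against sync. */
  private static boolean matches(byte[] window, byte[] sync, int oldest) {
    for (int j = 0; j < sync.length; j++) {
      if (window[(oldest + j) % sync.length] != sync[j]) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) throws IOException {
    byte[] sync = new byte[SYNC_SIZE];
    for (int i = 0; i < SYNC_SIZE; i++) {
      sync[i] = (byte) (0xA0 + i);
    }
    byte[] data = new byte[64];
    System.arraycopy(sync, 0, data, 23, SYNC_SIZE); // plant the marker at offset 23
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
    System.out.println(findSync(in, sync));         // prints 23
  }
}
{code}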

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-h0.21 - Build # 2046 - Still Failing

2013-04-04 Thread Apache Jenkins Server
Changes for Build #2032
[namit] HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu)


Changes for Build #2033
[gates] Removing old branches to limit size of Hive downloads.

[gates] Removing tags directory as we no longer need them and they're in the 
history.

[gates] Moving HCatalog into Hive.

[gates] Test that perms work for hcatalog

[hashutosh] HIVE-4007 : Create abstract classes for serializer and deserializer 
(Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-4042 : ignore mapjoin hint (Namit Jain via Ashutosh Chauhan)

[namit] HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit)

[namit] HIVE-4212 sort merge join should work for outer joins for more than 8 
inputs
(Namit via Gang Tim Liu)


Changes for Build #2034
[namit] HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit)


Changes for Build #2035
[kevinwilfong] HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to 
check if table exists. (Gang Tim Liu via kevinwilfong)

[gangtimliu] HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via 
Gang Tim Liu)

[gangtimliu] HIVE-4155: Expose ORC's FileDump as a service

[gangtimliu] HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin 
Wilfong via Gang Tim Liu)

[namit] HIVE-4149 wrong results big outer joins with array of ints
(Navis via namit)


Changes for Build #2036
[gates] HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog

[hashutosh] HIVE-4263 : Adjust build.xml package command to move all hcat jars 
and binaries into build (Alan Gates via Ashutosh Chauhan)

[namit] HIVE-4258 Log logical plan tree for debugging
(Navis via namit)

[navis] HIVE-2264 Hive server is SHUTTING DOWN when invalid queries being 
executed


Changes for Build #2037

Changes for Build #2038
[hashutosh] HIVE-4122 : Queries fail if timestamp data not in expected format 
(Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan)


Changes for Build #2039
[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)


Changes for Build #2040
[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)


Changes for Build #2041

Changes for Build #2042

Changes for Build #2043
[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce.sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files to .gitignore (Roshan Naik 
via Navis)


Changes for Build #2044
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)


Changes for Build #2045

Changes for Build #2046
[hashutosh] HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar 
(Samuel Yuan via Ashutosh Chauhan)




No tests ran.

The Apache Jenkins build system has built Hive-trunk-h0.21 (build #2046)

Status: Still Failing

Check console output at https://builds.apache.org/job/Hive-trunk-h0.21/2046/ to 
view the results.

[jira] [Commented] (HIVE-701) lots of reserved keywords in hive

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623321#comment-13623321
 ] 

Hudson commented on HIVE-701:
-

Integrated in Hive-trunk-h0.21 #2046 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2046/])
HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar (Samuel Yuan 
via Ashutosh Chauhan) (Revision 1464808)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464808
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* 
/hive/trunk/ql/src/test/queries/clientpositive/nonreserved_keywords_insert_into1.q
* 
/hive/trunk/ql/src/test/results/clientpositive/nonreserved_keywords_insert_into1.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> lots of reserved keywords in hive
> -
>
> Key: HIVE-701
> URL: https://issues.apache.org/jira/browse/HIVE-701
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Samuel Yuan
> Fix For: 0.11.0
>
> Attachments: HIVE-701.1.patch.txt, HIVE-701.2.patch.txt, 
> HIVE-701.D8397.1.patch, HIVE-701.HIVE-701.D8397.2.patch, 
> HIVE-701.HIVE-701.D8397.3.patch
>
>
> There is a problem if we want to use some reserved keywords:
> for example, creating a function named left/right is not possible, because 
> left/right is already a reserved keyword.
> The other way around should also be possible - if we want to add a 'show 
> tables status' command and some applications already use status as a column 
> name, they should not break.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4067) Followup to HIVE-701: reduce ambiguity in grammar

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623320#comment-13623320
 ] 

Hudson commented on HIVE-4067:
--

Integrated in Hive-trunk-h0.21 #2046 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2046/])
HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar (Samuel Yuan 
via Ashutosh Chauhan) (Revision 1464808)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464808
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* 
/hive/trunk/ql/src/test/queries/clientpositive/nonreserved_keywords_insert_into1.q
* 
/hive/trunk/ql/src/test/results/clientpositive/nonreserved_keywords_insert_into1.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> Followup to HIVE-701: reduce ambiguity in grammar
> -
>
> Key: HIVE-4067
> URL: https://issues.apache.org/jira/browse/HIVE-4067
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Samuel Yuan
>Assignee: Samuel Yuan
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-4067.D8883.1.patch, 
> HIVE-4067.HIVE-4067.HIVE-4067.HIVE-4067.D8883.2.patch
>
>
> After HIVE-701 the grammar has become much more ambiguous, and the 
> compilation generates a large number of warnings. Making FROM, DISTINCT, 
> PRESERVE, COLUMN, ALL, AND, OR, and NOT reserved keywords again reduces the 
> number of warnings to 134, up from the original 81 warnings but down from the 
> 565 after HIVE-701. Most of the remaining ambiguity is trivial, an example 
> being "KW_ELEM_TYPE | KW_KEY_TYPE | KW_VALUE_TYPE | identifier", and they are 
> all correctly handled by ANTLR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4297) LvJ operator does not have colExprMap for columns from UDTF

2013-04-04 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623289#comment-13623289
 ] 

Navis commented on HIVE-4297:
-

Made this a separate issue because column pruning (CP) for lateral view should be modified.

> LvJ operator does not have colExprMap for columns from UDTF
> ---
>
> Key: HIVE-4297
> URL: https://issues.apache.org/jira/browse/HIVE-4297
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
>
> The mapping information is needed for HIVE-4293

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4295) Lateral view makes invalid result if CP is disabled

2013-04-04 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-4295:


Status: Patch Available  (was: Open)

> Lateral view makes invalid result if CP is disabled
> ---
>
> Key: HIVE-4295
> URL: https://issues.apache.org/jira/browse/HIVE-4295
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-4295.D9963.1.patch
>
>
> For example,
> {noformat}
> >SELECT src.key, myKey, myVal FROM src lateral view 
> >explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
> 238   1   one
> 238   2   two
> 238   3   three
> {noformat}
> After CP disabled,
> {noformat}
> >SELECT src.key, myKey, myVal FROM src lateral view 
> >explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4295) Lateral view makes invalid result if CP is disabled

2013-04-04 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4295:
--

Attachment: HIVE-4295.D9963.1.patch

navis requested code review of "HIVE-4295 [jira] Lateral view makes invalid 
result if CP is disabled".

Reviewers: JIRA

HIVE-4295 Lateral view makes invalid result if CP is disabled

For example,


238 1   one
238 2   two
238 3   three

After CP disabled,


238 0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
238 0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
238 0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D9963

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
  ql/src/test/queries/clientpositive/udtf_explode.q
  ql/src/test/results/clientpositive/lateral_view.q.out
  ql/src/test/results/clientpositive/lateral_view_ppd.q.out
  ql/src/test/results/clientpositive/udtf_explode.q.out
  ql/src/test/results/clientpositive/udtf_json_tuple.q.out
  ql/src/test/results/clientpositive/udtf_parse_url_tuple.q.out
  ql/src/test/results/clientpositive/udtf_stack.q.out
  ql/src/test/results/clientpositive/union26.q.out

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/23799/

To: JIRA, navis


> Lateral view makes invalid result if CP is disabled
> ---
>
> Key: HIVE-4295
> URL: https://issues.apache.org/jira/browse/HIVE-4295
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Attachments: HIVE-4295.D9963.1.patch
>
>
> For example,
> {noformat}
> >SELECT src.key, myKey, myVal FROM src lateral view 
> >explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
> 238   1   one
> 238   2   two
> 238   3   three
> {noformat}
> After CP disabled,
> {noformat}
> >SELECT src.key, myKey, myVal FROM src lateral view 
> >explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3996) Correctly enforce the memory limit on the multi-table map-join

2013-04-04 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623287#comment-13623287
 ] 

Vikram Dixit K commented on HIVE-3996:
--

Comments addressed. [~namit] please let me know.

Thanks
Vikram.

> Correctly enforce the memory limit on the multi-table map-join
> --
>
> Key: HIVE-3996
> URL: https://issues.apache.org/jira/browse/HIVE-3996
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
> Attachments: HIVE-3996_2.patch, HIVE-3996_3.patch, HIVE-3996_4.patch, 
> HIVE-3996_5.patch, HIVE-3996_6.patch, HIVE-3996_7.patch, HIVE-3996.patch
>
>
> Currently with HIVE-3784, the joins are converted to map-joins based on 
> checks of the table size against the config variable: 
> hive.auto.convert.join.noconditionaltask.size. 
> However, the current implementation will also merge multiple mapjoin 
> operators into a single task regardless of whether the sum of the table sizes 
> will exceed the configured value.
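
As a rough illustration of the missing check, the sketch below only keeps merging
map-joins into the same task while the running sum of small-table sizes stays under
the hive.auto.convert.join.noconditionaltask.size limit. The class and method names
are hypothetical; the actual patch works on Hive's operator and task tree rather
than on a plain list of sizes.
{code}
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch: enforce the noconditionaltask size limit across
 *  several map-joins instead of checking each table in isolation. */
public class MapJoinMergePolicy {

  /** Greedily groups small-table sizes into tasks whose sum stays under the limit. */
  static List<List<Long>> groupUnderLimit(List<Long> smallTableSizes,
                                          long noConditionalTaskSize) {
    List<List<Long>> tasks = new ArrayList<>();
    List<Long> current = new ArrayList<>();
    long runningSum = 0;
    for (long size : smallTableSizes) {
      if (!current.isEmpty() && runningSum + size > noConditionalTaskSize) {
        tasks.add(current);                 // close the current task, start a new one
        current = new ArrayList<>();
        runningSum = 0;
      }
      current.add(size);
      runningSum += size;
    }
    if (!current.isEmpty()) {
      tasks.add(current);
    }
    return tasks;
  }

  public static void main(String[] args) {
    // With a 10 MB limit, the 6 MB and 5 MB tables must not share one task.
    List<Long> sizes = List.of(6_000_000L, 5_000_000L, 2_000_000L);
    System.out.println(groupUnderLimit(sizes, 10_000_000L));
    // prints [[6000000], [5000000, 2000000]]
  }
}
{code}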

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-hadoop2 - Build # 139 - Still Failing

2013-04-04 Thread Apache Jenkins Server
Changes for Build #138
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)

[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver 
(Thejas Nair via Ashutosh Chauhan)

[namit] HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit)

[namit] HIVE-4240 optimize hive.enforce.bucketing and hive.enforce.sorting 
insert
(Gang Tim Liu via namit)

[navis] HIVE-4288 Add IntelliJ project files to .gitignore (Roshan Naik 
via Navis)

[namit] HIVE-4272 partition wise metadata does not work for text files

[hashutosh] HIVE-896 : Add LEAD/LAG/FIRST/LAST analytical windowing functions 
to Hive. (Harish Butani via Ashutosh Chauhan)

[namit] HIVE-4260 union_remove_12, union_remove_13 are failing on hadoop2
(Gunther Hagleitner via namit)

[hashutosh] HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover 
via Ashutosh Chauhan)

[namit] HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple 
input partitions
(Namit via Gang Tim Liu)

[hashutosh] HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan)

[hashutosh] HIVE-4122 : Queries fail if timestamp data not in expected format 
(Prasad Mujumdar via Ashutosh Chauhan)

[hashutosh] HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan)

[gates] HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog

[hashutosh] HIVE-4263 : Adjust build.xml package command to move all hcat jars 
and binaries into build (Alan Gates via Ashutosh Chauhan)

[namit] HIVE-4258 Log logical plan tree for debugging
(Navis via namit)

[navis] HIVE-2264 Hive server is SHUTTING DOWN when invalid queries being 
executed

[kevinwilfong] HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to 
check if table exists. (Gang Tim Liu via kevinwilfong)

[gangtimliu] HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via 
Gang Tim Liu)

[gangtimliu] HIVE-4155: Expose ORC's FileDump as a service

[gangtimliu] HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin 
Wilfong via Gang Tim Liu)

[namit] HIVE-4149 wrong results big outer joins with array of ints
(Navis via namit)

[namit] HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit)

[gates] Removing old branches to limit size of Hive downloads.

[gates] Removing tags directory as we no longer need them and they're in the 
history.

[gates] Moving HCatalog into Hive.

[gates] Test that perms work for hcatalog

[hashutosh] HIVE-4007 : Create abstract classes for serializer and deserializer 
(Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh 
Chauhan)

[hashutosh] HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan)

[hashutosh] HIVE-4042 : ignore mapjoin hint (Namit Jain via Ashutosh Chauhan)

[namit] HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit)

[namit] HIVE-4212 sort merge join should work for outer joins for more than 8 
inputs
(Namit via Gang Tim Liu)

[namit] HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4092. Store complete names of tables in column access 
analyzer (Samuel Yuan via kevinwilfong)

[namit] HIVE-4208 Clientpositive test parenthesis_star_by is non-deterministic
(Mark Grover via namit)

[cws] HIVE-4217. Fix show_create_table_*.q test failures (Carl Steinbach via 
cws)

[namit] HIVE-4206 Sort merge join does not work for outer joins for 7 inputs
(Namit via Gang Tim Liu)

[kevinwilfong] HIVE-4188. TestJdbcDriver2.testDescribeTable failing 
consistently. (Prasad Mujumdar via kevinwilfong)

[hashutosh] HIVE-3820 Consider creating a literal like D or BD for representing 
Decimal type constants (Gunther Hagleitner v

[jira] [Commented] (HIVE-701) lots of reserved keywords in hive

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623271#comment-13623271
 ] 

Hudson commented on HIVE-701:
-

Integrated in Hive-trunk-hadoop2 #139 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/139/])
HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar (Samuel Yuan 
via Ashutosh Chauhan) (Revision 1464808)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464808
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* 
/hive/trunk/ql/src/test/queries/clientpositive/nonreserved_keywords_insert_into1.q
* 
/hive/trunk/ql/src/test/results/clientpositive/nonreserved_keywords_insert_into1.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> lots of reserved keywords in hive
> -
>
> Key: HIVE-701
> URL: https://issues.apache.org/jira/browse/HIVE-701
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Samuel Yuan
> Fix For: 0.11.0
>
> Attachments: HIVE-701.1.patch.txt, HIVE-701.2.patch.txt, 
> HIVE-701.D8397.1.patch, HIVE-701.HIVE-701.D8397.2.patch, 
> HIVE-701.HIVE-701.D8397.3.patch
>
>
> There is a problem if we want to use some reserved keywords:
> for example, creating a function named left/right is not possible, because 
> left/right is already a reserved keyword.
> The other way around should also be possible - if we want to add a 'show 
> tables status' command and some applications already use status as a column 
> name, they should not break.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4067) Followup to HIVE-701: reduce ambiguity in grammar

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623270#comment-13623270
 ] 

Hudson commented on HIVE-4067:
--

Integrated in Hive-trunk-hadoop2 #139 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/139/])
HIVE-4067 : Followup to HIVE-701: reduce ambiguity in grammar (Samuel Yuan 
via Ashutosh Chauhan) (Revision 1464808)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464808
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* 
/hive/trunk/ql/src/test/queries/clientpositive/nonreserved_keywords_insert_into1.q
* 
/hive/trunk/ql/src/test/results/clientpositive/nonreserved_keywords_insert_into1.q.out
* /hive/trunk/ql/src/test/results/compiler/errors/wrong_distinct2.q.out


> Followup to HIVE-701: reduce ambiguity in grammar
> -
>
> Key: HIVE-4067
> URL: https://issues.apache.org/jira/browse/HIVE-4067
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Samuel Yuan
>Assignee: Samuel Yuan
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-4067.D8883.1.patch, 
> HIVE-4067.HIVE-4067.HIVE-4067.HIVE-4067.D8883.2.patch
>
>
> After HIVE-701 the grammar has become much more ambiguous, and the 
> compilation generates a large number of warnings. Making FROM, DISTINCT, 
> PRESERVE, COLUMN, ALL, AND, OR, and NOT reserved keywords again reduces the 
> number of warnings to 134, up from the original 81 warnings but down from the 
> 565 after HIVE-701. Most of the remaining ambiguity is trivial, an example 
> being "KW_ELEM_TYPE | KW_KEY_TYPE | KW_VALUE_TYPE | identifier", and they are 
> all correctly handled by ANTLR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4297) LvJ operator does not have colExprMap for columns from UDTF

2013-04-04 Thread Navis (JIRA)
Navis created HIVE-4297:
---

 Summary: LvJ operator does not have colExprMap for columns from 
UDTF
 Key: HIVE-4297
 URL: https://issues.apache.org/jira/browse/HIVE-4297
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Trivial


The mapping information is needed for HIVE-4293

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4295) Lateral view makes invalid result if CP is disabled

2013-04-04 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623265#comment-13623265
 ] 

Navis commented on HIVE-4295:
-

Also, a lateral view (LV) which contains a virtual column (VC) fails at compile time.

> Lateral view makes invalid result if CP is disabled
> ---
>
> Key: HIVE-4295
> URL: https://issues.apache.org/jira/browse/HIVE-4295
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
>
> For example,
> {noformat}
> >SELECT src.key, myKey, myVal FROM src lateral view 
> >explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
> 238   1   one
> 238   2   two
> 238   3   three
> {noformat}
> After CP disabled,
> {noformat}
> >SELECT src.key, myKey, myVal FROM src lateral view 
> >explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> 238   0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4128) Support avg(decimal)

2013-04-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623264#comment-13623264
 ] 

Ashutosh Chauhan commented on HIVE-4128:


+1 will commit if tests pass.

> Support avg(decimal)
> 
>
> Key: HIVE-4128
> URL: https://issues.apache.org/jira/browse/HIVE-4128
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-4128-2.patch, HIVE-4128-3.patch, HIVE-4128-4.patch
>
>
> Currently the following query:
> {noformat}
> hive> select p_mfgr, avg(p_retailprice) from part group by p_mfgr;
> FAILED: UDFArgumentTypeException Only numeric or string type arguments are 
> accepted but decimal is passed
> {noformat}
> is not supported by hive but is on postgres.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2340) optimize orderby followed by a groupby

2013-04-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623262#comment-13623262
 ] 

Ashutosh Chauhan commented on HIVE-2340:


Cool. Running test on latest patch. Will commit if tests pass.

> optimize orderby followed by a groupby
> --
>
> Key: HIVE-2340
> URL: https://issues.apache.org/jira/browse/HIVE-2340
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
>  Labels: perfomance
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.2.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.3.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.4.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.5.patch, HIVE-2340.12.patch, 
> HIVE-2340.13.patch, HIVE-2340.14.patch, 
> HIVE-2340.14.rebased_and_schema_clone.patch, HIVE-2340.1.patch.txt, 
> HIVE-2340.D1209.10.patch, HIVE-2340.D1209.11.patch, HIVE-2340.D1209.12.patch, 
> HIVE-2340.D1209.13.patch, HIVE-2340.D1209.14.patch, HIVE-2340.D1209.15.patch, 
> HIVE-2340.D1209.6.patch, HIVE-2340.D1209.7.patch, HIVE-2340.D1209.8.patch, 
> HIVE-2340.D1209.9.patch, testclidriver.txt
>
>
> Before implementing optimizer for JOIN-GBY, try to implement RS-GBY 
> optimizer (cluster-by following group-by).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4166) closeAllForUGI causes failure in hiveserver2 when fetching large amount of data

2013-04-04 Thread Mithun Radhakrishnan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623257#comment-13623257
 ] 

Mithun Radhakrishnan commented on HIVE-4166:


For the record, I was able to stress-test a version of this code and verify 
that we're not leaking FileSystem instances. So we know that the fix for 
HIVE-3098 isn't undone here. This fix now works for HS2 and HCatalog.

> closeAllForUGI causes failure in hiveserver2 when fetching large amount of 
> data
> ---
>
> Key: HIVE-4166
> URL: https://issues.apache.org/jira/browse/HIVE-4166
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Security, Shims
>Affects Versions: 0.10.0, 0.11.0
>Reporter: Chris Drome
>Assignee: Chris Drome
> Fix For: 0.10.0, 0.11.0
>
> Attachments: HIVE-4166-0.10.patch, HIVE-4166-trunk.patch
>
>
> HiveServer2 configured to use Kerberos authentication with doAs enabled 
> throws an exception when fetching a large amount of data from a query.
> The exception is caused because FileSystem.closeAllForUGI is always called at 
> the end of TUGIAssumingProcessor.process. This affects requests on the 
> ResultSet for data from a SELECT query when the amount of data exceeds a 
> certain size. At that point any subsequent calls to fetch more data throw an 
> exception because the underlying DFSClient has been closed.
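
For readers unfamiliar with the failure mode, the sketch below reproduces it in
isolation: closing every cached FileSystem for a UGI invalidates streams a caller
still holds open, which is what the later ResultSet fetches run into. It assumes a
reachable HDFS at hdfs://localhost:9000 and an existing /tmp/sample.txt, and it is
not the HiveServer2 code itself.
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

/** Sketch of the symptom described above; paths and setup are illustrative. */
public class CloseAllForUgiDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:9000");  // assumption: local HDFS

    FileSystem fs = FileSystem.get(conf);                // cached per-UGI instance
    FSDataInputStream in = fs.open(new Path("/tmp/sample.txt"));
    in.read();                                           // works: DFSClient is open

    // Roughly what the report says happens at the end of TUGIAssumingProcessor.process:
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    FileSystem.closeAllForUGI(ugi);                      // closes the cached DFSClient

    in.read();  // fails with "Filesystem closed" - the symptom seen on fetch
  }
}
{code}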

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4067) Followup to HIVE-701: reduce ambiguity in grammar

2013-04-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-4067:
---

   Resolution: Fixed
Fix Version/s: 0.11.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Samuel!

> Followup to HIVE-701: reduce ambiguity in grammar
> -
>
> Key: HIVE-4067
> URL: https://issues.apache.org/jira/browse/HIVE-4067
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Samuel Yuan
>Assignee: Samuel Yuan
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-4067.D8883.1.patch, 
> HIVE-4067.HIVE-4067.HIVE-4067.HIVE-4067.D8883.2.patch
>
>
> After HIVE-701 the grammar has become much more ambiguous, and the 
> compilation generates a large number of warnings. Making FROM, DISTINCT, 
> PRESERVE, COLUMN, ALL, AND, OR, and NOT reserved keywords again reduces the 
> number of warnings to 134, up from the original 81 warnings but down from the 
> 565 after HIVE-701. Most of the remaining ambiguity is trivial, an example 
> being "KW_ELEM_TYPE | KW_KEY_TYPE | KW_VALUE_TYPE | identifier", and they are 
> all correctly handled by ANTLR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4296) ant thriftif fails on hcatalog

2013-04-04 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-4296:
--

Attachment: HIVE-4296.patch

Removing hcatalog from the thriftif build target.

> ant thriftif  fails on  hcatalog
> 
>
> Key: HIVE-4296
> URL: https://issues.apache.org/jira/browse/HIVE-4296
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.10.0
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: HIVE-4296.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HIVE-4296) ant thriftif fails on hcatalog

2013-04-04 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-4296:
--

Status: Patch Available  (was: Open)

> ant thriftif  fails on  hcatalog
> 
>
> Key: HIVE-4296
> URL: https://issues.apache.org/jira/browse/HIVE-4296
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.10.0
>Reporter: Roshan Naik
>Assignee: Roshan Naik
> Attachments: HIVE-4296.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4296) ant thriftif fails on hcatalog

2013-04-04 Thread Roshan Naik (JIRA)
Roshan Naik created HIVE-4296:
-

 Summary: ant thriftif  fails on  hcatalog
 Key: HIVE-4296
 URL: https://issues.apache.org/jira/browse/HIVE-4296
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.10.0
Reporter: Roshan Naik
Assignee: Roshan Naik




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HIVE-4295) Lateral view makes invalid result if CP is disabled

2013-04-04 Thread Navis (JIRA)
Navis created HIVE-4295:
---

 Summary: Lateral view makes invalid result if CP is disabled
 Key: HIVE-4295
 URL: https://issues.apache.org/jira/browse/HIVE-4295
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor


For example,
{noformat}
>SELECT src.key, myKey, myVal FROM src lateral view 
>explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;

238 1   one
238 2   two
238 3   three
{noformat}

After CP disabled,

{noformat}
>SELECT src.key, myKey, myVal FROM src lateral view 
>explode(map(1,'one',2,'two',3,'three')) x AS myKey,myVal LIMIT 3;

238 0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
238 0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
238 0   hdfs://localhost:9000/user/hive/warehouse/src/kv1.txt
{noformat}


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: HBase Types: Explicit Null Support

2013-04-04 Thread Nick Dimiduk
On Wed, Apr 3, 2013 at 11:29 AM, Dmitriy Ryaboy  wrote:

> Hiya Nick,
> Pig converts data for HBase storage using this class:
>
> https://svn.apache.org/repos/asf/pig/trunk/src/org/apache/pig/backend/hadoop/hbase/HBaseBinaryConverter.java
> (which is mostly just calling into HBase's Bytes class). As long as Bytes
> handles the null stuff, we'll just inherit the behavior.
>

Dmitriy,

Precisely how this will be exposed via the hbase client is TBD. We won't be
deprecating the existing Bytes utility from the client view, so a new API
for supporting these types will be provided. I'll be able to provide
support and/or a patch for Pig (et al) once  the implementation is a bit
further along.

My question for you as a Pig representative is more about how Pig users
expect Pig to handle NULLs. Are NULL values within a tuple a
common occurrence in Pig? In comparison, I'm thinking about the prevalence
of NULL in SQL.

Thanks,
Nick

On Tue, Apr 2, 2013 at 9:40 AM, Nick Dimiduk  wrote:

> I agree that a user-extensible interface is a required feature here.
> Personally, I'd love to ship a set of standard GIS tools on HBase. Let's
> keep in mind, though, that SQL and user applications are not the only
> consumers of this interface. A big motivation is allowing interop with the
> other higher MR languages. *cough* Where are my Pig and Hive peeps in this
> thread?
>
> On Mon, Apr 1, 2013 at 11:33 PM, James Taylor  wrote:
>
> > Maybe if we can keep nullability separate from the
> > serialization/deserialization, we can come up with a solution that works?
> > We're able to essentially infer that a column is null based on its value
> > being missing or empty. So if an iterator through the row key bytes could
> > detect/indicate that, then an application could "infer" the value is null.
> >
> > We're definitely planning on keeping byte[] accessors for use cases that
> > need it. I'm curious on the geographic data case, though, could you use a
> > fixed length long with a couple of new SQL built-ins to encode/decode the
> > latitude/longitude?
> >
> > On 04/01/2013 11:29 PM, Jesse Yates wrote:
> >
> > > Actually, that isn't all that far-fetched of a format Matt - pretty
> > > common anytime anyone wants to do sortable lat/long (*cough* three
> > > letter agencies *cough*).
> > >
> > > Wouldn't we get the same by providing a simple set of libraries (ala
> > > orderly + other HBase useful things) and then still give access to the
> > > underlying byte array? Perhaps a nullable key type in that lib makes
> > > sense if lots of people need it and it would be nice to have standard
> > > libraries so tools could interop much more easily.
> > > ---
> > > Jesse Yates
> > > @jesse_yates
> > > jyates.github.com
> > >
> > > On Mon, Apr 1, 2013 at 11:17 PM, Matt Corgan  wrote:
> > >
> > > > Ah, I didn't even realize sql allowed null key parts. Maybe a goal of
> > > > the interfaces should be to provide first-class support for custom
> > > > user types in addition to the standard ones included. Part of the
> > > > power of hbase's plain byte[] keys is that users can concoct the
> > > > perfect key for their data type. For example, I have a lot of
> > > > geographic data where I interleave latitude/longitude bits into a
> > > > sortable 64 bit value that would probably never be included in a
> > > > standard library.
> > > >
> > > > On Mon, Apr 1, 2013 at 8:38 PM, Enis Söztutar  wrote:
> > > >
> > > > > I think having Int32, and NullableInt32 would support minimum
> > > > > overhead, as well as allowing SQL semantics.
> > > > >
> > > > > On Mon, Apr 1, 2013 at 7:26 PM, Nick Dimiduk  wrote:
> > > > >
> > > > > > Furthermore, is it more important to support null values than
> > > > > > squeeze all representations into minimum size (4-bytes for
> > > > > > int32, &c.)?
> > > > > > On Apr 1, 2013 4:41 PM, "Nick Dimiduk"  wrote:
> > > > > >
> > > > > > > On Mon, Apr 1, 2013 at 4:31 PM, James Taylor <
> > > > > > > jtay...@salesforce.com> wrote:
> > > > > > >
> > > > > > > > From the SQL perspective, handling null is important.
> > > > > > >
> > > > > > > From your perspective, it is critical to support NULLs, even at
> > > > > > > the expense of fixed-width encodings at all or supporting
> > > > > > > representation of a full range of values. That is, you'd rather
> > > > > > > be able to represent NULL than -2^31?
> > > > > > >
> > > > > > > On 04/01/2013 01:32 PM, Nick Dimiduk wrote:
> > > > > > >
> > > > > > > > Thanks for the thoughtful response (and code!).
> > > > > > > >
> > > > > > > > I'm thinking I will press forward with a base implementation
> > > > > > > > that does not support nulls. The idea is to provide an
> > > > > > > > extensible set of

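As a side note on the lat/long key mentioned by Matt Corgan above, here is a minimal
sketch of interleaving two 32-bit values into one sortable 64-bit key (Z-order /
Morton encoding). The degree-to-integer quantization and the class name are
assumptions for illustration only.
{code}
/** Illustrative Morton (Z-order) interleaving of latitude/longitude bits
 *  into a single sortable 64-bit value. */
public class LatLongMorton {

  /** Spreads the 32 bits of v so they occupy every other bit of a long. */
  static long spreadBits(long v) {
    v &= 0xFFFFFFFFL;
    v = (v | (v << 16)) & 0x0000FFFF0000FFFFL;
    v = (v | (v << 8))  & 0x00FF00FF00FF00FFL;
    v = (v | (v << 4))  & 0x0F0F0F0F0F0F0F0FL;
    v = (v | (v << 2))  & 0x3333333333333333L;
    v = (v | (v << 1))  & 0x5555555555555555L;
    return v;
  }

  /** Interleaves two unsigned 32-bit values: lat bits even, long bits odd. */
  static long interleave(long lat32, long lon32) {
    return spreadBits(lat32) | (spreadBits(lon32) << 1);
  }

  /** Maps degrees onto an unsigned 32-bit range before interleaving. */
  static long quantize(double degrees, double min, double max) {
    return (long) ((degrees - min) / (max - min) * 0xFFFFFFFFL);
  }

  public static void main(String[] args) {
    long lat = quantize(37.78, -90, 90);      // latitude in degrees
    long lon = quantize(-122.41, -180, 180);  // longitude in degrees
    System.out.printf("sortable key = %016x%n", interleave(lat, lon));
  }
}
{code}
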
[jira] [Updated] (HIVE-2340) optimize orderby followed by a groupby

2013-04-04 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-2340:
--

Attachment: HIVE-2340.D1209.15.patch

navis updated the revision "HIVE-2340 [jira] optimize orderby followed by a 
groupby".

  Keep schema of RS intact in CP

Reviewers: hagleitn, JIRA

REVISION DETAIL
  https://reviews.facebook.net/D1209

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D1209?vs=29571&id=31179#toc

BRANCH
  DPAL-592

ARCANIST PROJECT
  hive

AFFECTED FILES
  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
  conf/hive-default.xml.template
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ColumnPrunerProcFactory.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/NonBlockingOpDeDupProc.java
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkDeDuplication.java
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/CommonJoinResolver.java
  
ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/SkewJoinProcFactory.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
  ql/src/java/org/apache/hadoop/hive/ql/plan/JoinDesc.java
  ql/src/test/queries/clientpositive/auto_join26.q
  ql/src/test/queries/clientpositive/groupby_distinct_samekey.q
  ql/src/test/queries/clientpositive/reduce_deduplicate.q
  ql/src/test/queries/clientpositive/reduce_deduplicate_extended.q
  ql/src/test/results/clientpositive/cluster.q.out
  ql/src/test/results/clientpositive/groupby2.q.out
  ql/src/test/results/clientpositive/groupby2_map_skew.q.out
  ql/src/test/results/clientpositive/groupby_cube1.q.out
  ql/src/test/results/clientpositive/groupby_distinct_samekey.q.out
  ql/src/test/results/clientpositive/groupby_rollup1.q.out
  ql/src/test/results/clientpositive/index_bitmap3.q.out
  ql/src/test/results/clientpositive/index_bitmap_auto.q.out
  ql/src/test/results/clientpositive/infer_bucket_sort.q.out
  ql/src/test/results/clientpositive/ppd2.q.out
  ql/src/test/results/clientpositive/ppd_gby_join.q.out
  ql/src/test/results/clientpositive/reduce_deduplicate_extended.q.out
  ql/src/test/results/clientpositive/semijoin.q.out
  ql/src/test/results/clientpositive/union24.q.out
  ql/src/test/results/compiler/plan/join1.q.xml
  ql/src/test/results/compiler/plan/join2.q.xml
  ql/src/test/results/compiler/plan/join3.q.xml

To: JIRA, hagleitn, navis
Cc: hagleitn, njain


> optimize orderby followed by a groupby
> --
>
> Key: HIVE-2340
> URL: https://issues.apache.org/jira/browse/HIVE-2340
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
>  Labels: perfomance
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.2.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.3.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.4.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.5.patch, HIVE-2340.12.patch, 
> HIVE-2340.13.patch, HIVE-2340.14.patch, 
> HIVE-2340.14.rebased_and_schema_clone.patch, HIVE-2340.1.patch.txt, 
> HIVE-2340.D1209.10.patch, HIVE-2340.D1209.11.patch, HIVE-2340.D1209.12.patch, 
> HIVE-2340.D1209.13.patch, HIVE-2340.D1209.14.patch, HIVE-2340.D1209.15.patch, 
> HIVE-2340.D1209.6.patch, HIVE-2340.D1209.7.patch, HIVE-2340.D1209.8.patch, 
> HIVE-2340.D1209.9.patch, testclidriver.txt
>
>
> Before implementing optimizer for JOIN-GBY, try to implement RS-GBY 
> optimizer (cluster-by following group-by).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2340) optimize orderby followed by a groupby

2013-04-04 Thread Navis (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623113#comment-13623113
 ] 

Navis commented on HIVE-2340:
-

Got it. I've updated patch (added small comment on this). Thanks.

> optimize orderby followed by a groupby
> --
>
> Key: HIVE-2340
> URL: https://issues.apache.org/jira/browse/HIVE-2340
> Project: Hive
>  Issue Type: Sub-task
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
>  Labels: perfomance
> Attachments: ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.1.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.2.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.3.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.4.patch, 
> ASF.LICENSE.NOT.GRANTED--HIVE-2340.D1209.5.patch, HIVE-2340.12.patch, 
> HIVE-2340.13.patch, HIVE-2340.14.patch, 
> HIVE-2340.14.rebased_and_schema_clone.patch, HIVE-2340.1.patch.txt, 
> HIVE-2340.D1209.10.patch, HIVE-2340.D1209.11.patch, HIVE-2340.D1209.12.patch, 
> HIVE-2340.D1209.13.patch, HIVE-2340.D1209.14.patch, HIVE-2340.D1209.15.patch, 
> HIVE-2340.D1209.6.patch, HIVE-2340.D1209.7.patch, HIVE-2340.D1209.8.patch, 
> HIVE-2340.D1209.9.patch, testclidriver.txt
>
>
> Before implementing an optimizer for JOIN-GBY, try to implement an RS-GBY 
> optimizer (cluster-by following group-by).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Hive-trunk-hadoop2 - Build # 138 - Still Failing

2013-04-04 Thread Apache Jenkins Server
Changes for Build #105
[hashutosh] HIVE-3918 : Normalize more CRLF line endings (Mark Grover via 
Ashutosh Chauhan)

[namit] HIVE-3917 Support noscan operation for analyze command
(Gang Tim Liu via namit)


Changes for Build #106
[namit] HIVE-3937 Hive Profiler
(Pamela Vagata via namit)

[hashutosh] HIVE-3571 : add a way to run a small unit quickly (Navis via 
Ashutosh Chauhan)

[hashutosh] HIVE-3956 : TestMetaStoreAuthorization always uses the same port 
(Navis via Ashutosh Chauhan)


Changes for Build #107

Changes for Build #108

Changes for Build #109

Changes for Build #110
[namit] HIVE-2839 Filters on outer join with mapjoin hint is not applied 
correctly
(Navis via namit)


Changes for Build #111

Changes for Build #112
[namit] HIVE-3998 Oracle metastore update script will fail when upgrading from 
0.9.0 to
0.10.0 (Jarek and Mark via namit)

[namit] HIVE-3999 Mysql metastore upgrade script will end up with different 
schema than
the full schema load (Jarek and Mark via namit)


Changes for Build #113

Changes for Build #114
[namit] HIVE-3995 PostgreSQL upgrade scripts are not valid
(Jarek and Mark via namit)


Changes for Build #115

Changes for Build #116
[namit] HIVE-4001 Add o.a.h.h.serde.Constants for backward compatibility
(Navis via namit)


Changes for Build #117

Changes for Build #118

Changes for Build #119

Changes for Build #120
[kevinwilfong] HIVE-3252. Add environment context to metastore Thrift calls. 
(Samuel Yuan via kevinwilfong)


Changes for Build #121

Changes for Build #122

Changes for Build #123

Changes for Build #124

Changes for Build #125

Changes for Build #126
[hashutosh] HIVE-4000 Hive client goes into infinite loop at 100% cpu (Owen 
Omalley via Ashutosh Chauhan)


Changes for Build #127
[namit] HIVE-4021 PostgreSQL upgrade scripts are creating column with incorrect 
name
(Jarek Jarcec Cecho via namit)

[hashutosh] HIVE-4033 : NPE at runtime while selecting virtual column after 
joining three tables on different keys (Ashutosh Chauhan)

[namit] HIVE-4029 Hive Profiler dies with NPE
(Brock Noland via namit)


Changes for Build #128
[namit] HIVE-4023 Improve Error Logging in MetaStore
(Bhushan Mandhani via namit)

[namit] HIVE-3403 user should not specify mapjoin to perform sort-merge 
bucketed join
(Namit Jain via Ashutosh)

[namit] HIVE-4024 Derby metastore update script will fail when upgrading from 
0.9.0
to 0.10.0 (Jarek Jarcec Cecho via namit)


Changes for Build #129

Changes for Build #130
[namit] HIVE-4027 Thrift alter_table api doesnt validate column type
(Gang Tim Liu via namit)

[namit] HIVE-4039 Hive compiler sometimes fails in semantic analysis / 
optimisation stage when boolean
variable appears in WHERE clause. (Jezn Xu via namit)

[namit] HIVE-4004 Incorrect status for AddPartition metastore event if RawStore 
commit fails
(Dilip Joseph via namit)


Changes for Build #131

Changes for Build #132
[namit] HIVE-3970 Clean up/fix PartitionNameWhitelistPreEventListener
(Kevin Wilfong via namit)

[namit] HIVE-3741 Driver.validateConfVariables() should perform more validations
(Gang Tim Liu via namit)


Changes for Build #133
[kevinwilfong] HIVE-701. lots of reserved keywords in hive. (Samuel Yuan via 
kevinwilfong)

[namit] HIVE-3710 HiveConf.ConfVars.HIVE_STATS_COLLECT_RAWDATASIZE should not be
checked in FileSinkOperator (Gang Tim Liu via namit)

[hashutosh] HIVE-3788 : testCliDriver_repair fails on hadoop-1 (Gunther 
Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4016 : Remove init(fname) from TestParse.vm for each 
test(Navis via Ashutosh Chauhan)


Changes for Build #134
[namit] HIVE-4025 Add reflect UDF for member method invocation of column
(Navis via namit)


Changes for Build #135
[namit] HIVE-3672 Support altering partition column type in Hive
(Jingwei Lu via namit)


Changes for Build #136

Changes for Build #137

Changes for Build #138
[namit] HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit)

[namit] HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu)

[hashutosh] Moving hcatalog site outside of trunk

[hashutosh] Moving hcatalog branches outside of trunk

[hashutosh] HIVE-4259 : SEL operator created with missing columnExprMap for 
unions (Gunther Hagleitner via Ashutosh Chauhan)

[hashutosh] HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen 
Omalley via Ashutosh Chauhan)

[hashutosh] HIVE-3464 : Merging join tree may reorder joins which could be 
invalid (Navis via Ashutosh Chauhan)

[hashutosh] HIVE-4138 : ORC's union object inspector returns a type name that 
isn't parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan)

[cws] HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with 
NPE if the table is empty (Shreepadma Venugopalan via cws)

[hashutosh] HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan)

[hashutosh] HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators 
correctly

[jira] [Commented] (HIVE-3820) Consider creating a literal like "D" or "BD" for representing Decimal type constants

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623080#comment-13623080
 ] 

Hudson commented on HIVE-3820:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3820 Consider creating a literal like D or BD for representing Decimal 
type constants (Gunther Hagleitner via Ashutosh Chauhan) (Revision 1459298)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459298
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* /hive/trunk/ql/src/test/queries/clientpositive/literal_decimal.q
* /hive/trunk/ql/src/test/results/clientpositive/literal_decimal.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/JavaBigDecimalObjectInspector.java


> Consider creating a literal like "D" or "BD" for representing Decimal type 
> constants
> 
>
> Key: HIVE-3820
> URL: https://issues.apache.org/jira/browse/HIVE-3820
> Project: Hive
>  Issue Type: Bug
>Reporter: Mark Grover
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-3820.1.patch, HIVE-3820.2.patch, 
> HIVE-3820.D8823.1.patch
>
>
> When HIVE-2693 gets committed, users are going to see this behavior:
> {code}
> hive> select cast(3.14 as decimal) from decimal_3 limit 1;
> 3.140124344978758017532527446746826171875
> {code}
> That's intuitively incorrect, but it happens because 3.14 (a double) is being 
> converted to BigDecimal, which causes a precision mismatch.
> We should consider creating a new literal for expressing constants of Decimal 
> type as Gunther suggested in HIVE-2693.
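
As a hedged illustration (assuming the suffix chosen for the new literal is BD, as discussed in this issue), such a literal lets the constant be parsed directly as a decimal instead of going through a double:
{code}
-- parsed as a decimal constant; no double-to-decimal conversion is involved
SELECT 3.14BD FROM decimal_3 LIMIT 1;
{code}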

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4208) Clientpositive test parenthesis_star_by is non-deterministic

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623078#comment-13623078
 ] 

Hudson commented on HIVE-4208:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4208 Clientpositive test parenthesis_star_by is non-deterministic
(Mark Grover via namit) (Revision 1459729)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459729
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/parenthesis_star_by.q
* /hive/trunk/ql/src/test/results/clientpositive/parenthesis_star_by.q.out


> Clientpositive test parenthesis_star_by is non-deterministic
> ---
>
> Key: HIVE-4208
> URL: https://issues.apache.org/jira/browse/HIVE-4208
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 0.10.0
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 0.11.0
>
> Attachments: HIVE-4208.1.patch
>
>
> parenthesis_star_by is testing {{DISTRIBUTE BY}}; however, the order of rows 
> returned by {{DISTRIBUTE BY}} is not deterministic and results in failures 
> depending on Hadoop version.
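
As a general illustration (not necessarily the exact change in the attached patch), the usual remedy for such tests is to pair DISTRIBUTE BY with SORT BY, or to use ORDER BY, so the result order no longer depends on the shuffle implementation:
{code}
SELECT key, value FROM src DISTRIBUTE BY key SORT BY key, value;
{code}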

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4206) Sort merge join does not work for outer joins for 7 inputs

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623079#comment-13623079
 ] 

Hudson commented on HIVE-4206:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4206 Sort merge join does not work for outer joins for 7 inputs
(Namit via Gang Tim Liu) (Revision 1459405)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459405
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/smb_mapjoin_17.q
* /hive/trunk/ql/src/test/results/clientpositive/smb_mapjoin_17.q.out


> Sort merge join does not work for outer joins for 7 inputs
> --
>
> Key: HIVE-4206
> URL: https://issues.apache.org/jira/browse/HIVE-4206
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4206.1.patch, hive.4206.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3490) Implement * or a.* for arguments to UDFs

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623077#comment-13623077
 ] 

Hudson commented on HIVE-3490:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3490 Implement * or a.* for arguments to UDFs
(Navis via namit) (Revision 1452189)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452189
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeColumnListDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/allcolref_in_udf.q
* /hive/trunk/ql/src/test/results/clientpositive/allcolref_in_udf.q.out


> Implement * or a.* for arguments to UDFs
> 
>
> Key: HIVE-3490
> URL: https://issues.apache.org/jira/browse/HIVE-3490
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor, UDF
>Reporter: Adam Kramer
>Assignee: Navis
> Fix For: 0.11.0
>
> Attachments: HIVE-3490.D8889.1.patch, HIVE-3490.D8889.2.patch
>
>
> For a random UDF, we should be able to use * or a.* to refer to "all of the 
> columns in their natural order." This is not currently implemented.
> I'm reporting this as a bug because it is a manner in which Hive is 
> inconsistent with the SQL spec, and because Hive claims to implement *.
> hive> select all_non_null(a.*) from table a where a.ds='2012-09-01';
> FAILED: ParseException line 1:25 mismatched input '*' expecting Identifier 
> near '.' in expression specification
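
With this fix, expressions of the following form are expected to parse, with * / a.* expanding to the table's columns in their natural order (a hedged sketch; all_non_null in the report above is the reporter's hypothetical UDF):
{code}
SELECT concat(*) FROM src;
SELECT concat(a.*) FROM src a;
{code}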

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3348) semi-colon in comments in .q file does not work

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623075#comment-13623075
 ] 

Hudson commented on HIVE-3348:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3348 semi-colon in comments in .q file does not work
(Nick Collins via namit) (Revision 1460990)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1460990
Files : 
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/queries/clientpositive/semicolon.q
* /hive/trunk/ql/src/test/results/clientpositive/semicolon.q.out


> semi-colon in comments in .q file does not work
> ---
>
> Key: HIVE-3348
> URL: https://issues.apache.org/jira/browse/HIVE-3348
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Namit Jain
>Assignee: Nick Collins
> Fix For: 0.11.0
>
> Attachments: hive-3348.patch
>
>
> -- comment ;
> -- comment
> select count(1) from src;
> The above test file fails

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4020) Swap applying order of CP and PPD

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623074#comment-13623074
 ] 

Hudson commented on HIVE-4020:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4020: Swap applying order of CP and PPD (Navis via Ashutosh Chauhan) 
(Revision 1452423)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1452423
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
* /hive/trunk/ql/src/test/results/clientpositive/auto_join19.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative.q.out
* /hive/trunk/ql/src/test/results/clientpositive/filter_join_breaktask.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/index_auto_mult_tables_compact.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input39_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join38.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_map_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/lateral_view_ppd.q.out
* /hive/trunk/ql/src/test/results/clientpositive/louter_join_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ppd_repeated_alias.q.out
* /hive/trunk/ql/src/test/results/clientpositive/router_join_ppr.q.out
* /hive/trunk/ql/src/test/results/clientpositive/smb_mapjoin9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/sort_merge_join_desc_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/stats11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union26.q.out
* /hive/trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/cast1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input9.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_part1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/input_testxpath2.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join4.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join5.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join6.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/join8.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/sample7.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/subq.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/udf1.q.xml
* /hive/trunk/ql/src/test/results/compiler/plan/union.q.xml


> Swap applying order of CP and PPD
> -
>
> Key: HIVE-4020
> URL: https://issues.apache.org/jira/browse/HIVE-4020
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-4020.D8571.1.patch, HIVE-4020.D8571.2.patch
>
>
> While working on HIVE-2340, I found that CP removed some column mappings needed 
> for backtracking expression descriptors. Swapping the order of CP and PPD solved 
> the problem. 
> After that I realized that applying CP at an earlier stage is possible once PPD 
> has been applied, because some columns in the filter predicate are not selected 
> and can be removed right after the new pushed-down filter. For example 
> (bucketmapjoin1.q):
> 
> select /*+mapjoin(b)*/ a.key, a.value, b.value
> from srcbucket_mapjoin_part a join srcbucket_mapjoin_part_2 b
> on a.key=b.key where b.ds="2008-04-08"
> 
> plan for hashtable sink operator is changed to 
> 
> HashTable Sink Operator
>   condition expressions:
> 0 {key} {value}
> 1 {value}
> 
> which was 
> 
> HashTable Sink Operator
>   condition expressions:
> 0 {key} {value}
> 1 {value} {ds}
> 
> HIVE-2340 seemed to need more time before commit, so this was filed as a separate issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4169) union_remove_*.q fail on hadoop 2

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623076#comment-13623076
 ] 

Hudson commented on HIVE-4169:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4169 : union_remove_*.q fail on hadoop 2 (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1456737)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456737
Files : 
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_15.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_16.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_17.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_18.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_19.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_21.q.out


> union_remove_*.q fail on hadoop 2
> -
>
> Key: HIVE-4169
> URL: https://issues.apache.org/jira/browse/HIVE-4169
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-4169.1.patch
>
>
> union_remove_1.q
> union_remove_11.q
> union_remove_14.q
> union_remove_15.q
> union_remove_16.q
> union_remove_17.q
> union_remove_18.q
> union_remove_19.q
> union_remove_2.q
> union_remove_20.q
> union_remove_21.q
> all fail on hadoop 2 (and only run on hadoop 2) because of outdated golden 
> files. The query plan has slightly changed (removing an unnecessary select 
> op). The query results are the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4162) disable TestBeeLineDriver

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623073#comment-13623073
 ] 

Hudson commented on HIVE-4162:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4162. disable TestBeeLineDriver. (Thejas M Nair via kevinwilfong) 
(Revision 1457117)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1457117
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties


> disable TestBeeLineDriver
> -
>
> Key: HIVE-4162
> URL: https://issues.apache.org/jira/browse/HIVE-4162
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.11.0
>
> Attachments: HIVE-4162.1.patch
>
>
> See HIVE-4161. We should disable the TestBeeLineDriver test cases. In its 
> current state, it was not supposed to be enabled by default.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4252) hiveserver2 string representation of complex types are inconsistent with cli

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623072#comment-13623072
 ] 

Hudson commented on HIVE-4252:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4252 : hiveserver2 string representation of complex types are 
inconsistent with cli (Thejas Nair via Ashutosh Chauhan) (Revision 1464049)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464049
Files : 
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java


> hiveserver2 string representation of complex types are inconsistent with cli
> 
>
> Key: HIVE-4252
> URL: https://issues.apache.org/jira/browse/HIVE-4252
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.11.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.11.0
>
> Attachments: HIVE-4252.1.patch, HIVE-4252.2.patch
>
>
> For example, it prints struct as "[null, null, null]" instead of  
> "{\"r\":null,\"s\":null,\"t\":null}"
> And for maps it is printing it as "{k=v}" instead of {\"k\":\"v\"}
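
A small query that exercises the affected representations (illustrative only; named_struct and map are standard Hive UDFs):
{code}
-- HiveServer2 should render these the same way the CLI does, i.e. as JSON-style strings
SELECT named_struct('r', 1, 's', 'x', 't', 1.5), map('k', 'v') FROM src LIMIT 1;
{code}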

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3297) change hive.auto.convert.join's default value to true

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623071#comment-13623071
 ] 

Hudson commented on HIVE-3297:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3297 : change hive.auto.convert.joins default value to true (Ashutosh 
Chauhan) (Revision 1453649)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1453649
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/data/conf/hive-site.xml


> change hive.auto.convert.join's default value to true
> -
>
> Key: HIVE-3297
> URL: https://issues.apache.org/jira/browse/HIVE-3297
> Project: Hive
>  Issue Type: Bug
>Reporter: Namit Jain
>Assignee: Ashutosh Chauhan
> Fix For: 0.11.0
>
> Attachments: HIVE-3297.patch
>
>
> For unit tests also, this parameter should be set to true.
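
With the default flipped to true, eligible joins are converted to map joins automatically; the previous behaviour can still be restored per session (a usage note, not part of the patch):
{code}
SET hive.auto.convert.join=false;
{code}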

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4259) SEL operator created with missing columnExprMap for unions

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623070#comment-13623070
 ] 

Hudson commented on HIVE-4259:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4259 : SEL operator created with missing columnExprMap for unions 
(Gunther Hagleitner via Ashutosh Chauhan) (Revision 1464248)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464248
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_24.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_9.q.out


> SEL operator created with missing columnExprMap for unions
> --
>
> Key: HIVE-4259
> URL: https://issues.apache.org/jira/browse/HIVE-4259
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: HIVE-4259.1.patch
>
>
> Causes failures in some union_remove*.q testcases

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4098) OrcInputFormat assumes Hive always calls createValue

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623069#comment-13623069
 ] 

Hudson commented on HIVE-4098:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4098 : OrcInputFormat assumes Hive always calls createValue (Owen 
Omalley via Ashutosh Chauhan) (Revision 1454454)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454454
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java


> OrcInputFormat assumes Hive always calls createValue
> 
>
> Key: HIVE-4098
> URL: https://issues.apache.org/jira/browse/HIVE-4098
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.11.0
>
> Attachments: HIVE-4098.D9021.1.patch
>
>
> Hive's HiveContextAwareRecordReader doesn't create a new value for each 
> InputFormat and instead reuses the same row between input formats. That 
> causes the first record of the second (and third, etc.) partition to be dropped 
> and replaced with the last row of the previous partition.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4258) Log logical plan tree for debugging

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623067#comment-13623067
 ] 

Hudson commented on HIVE-4258:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4258 Log logical plan tree for debugging
(Navis via namit) (Revision 1462531)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462531
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java


> Log logical plan tree for debugging
> ---
>
> Key: HIVE-4258
> URL: https://issues.apache.org/jira/browse/HIVE-4258
> Project: Hive
>  Issue Type: Improvement
>  Components: Diagnosability
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.11.0
>
> Attachments: HIVE-4258.D9801.1.patch
>
>
> When debugging or implementing an optimizer, knowing the shape of the logical 
> plan helps.
> For example,
> select count(val) from (select a.key as key, b.value as array_val from src a 
> join array_valued_src b on a.key=b.key) i lateral view explode (array_val) c 
> as val
> {noformat} 
> TS[1]-RS[2]-JOIN[4]-SEL[5]-LVF[6]-SEL[7]-LVJ[10]-SEL[11]-GBY[12]-RS[13]-GBY[14]-SEL[15]-FS[16]
>  -SEL[8]-UDTF[9]-LVJ[10]
> TS[0]-RS[3]-JOIN[4]
> {noformat} 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3980) Cleanup after HIVE-3403

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623068#comment-13623068
 ] 

Hudson commented on HIVE-3980:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3980 : Cleanup after 3403 (Namit Jain via Ashutosh Chauhan) (Revision 
1461012)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461012
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractBucketJoinProc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/AbstractSMBJoinProc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/BucketJoinProcCtx.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapJoinDesc.java


> Cleanup after HIVE-3403
> ---
>
> Key: HIVE-3980
> URL: https://issues.apache.org/jira/browse/HIVE-3980
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Fix For: 0.11.0
>
> Attachments: hive.3980.1.patch, hive.3980.2.patch, hive.3980.3.patch, 
> hive.3980.4.patch
>
>
> There have been a lot of comments on HIVE-3403, which involve changing 
> variable names and function names, adding more comments, and general cleanup.
> Since HIVE-3403 involves a lot of refactoring, it was fairly difficult to
> address the comments there, since refreshing the patch becomes impossible. This jira
> tracks those cleanups.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4096) problem in hive.map.groupby.sorted with distincts

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623066#comment-13623066
 ] 

Hudson commented on HIVE-4096:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4096. problem in hive.map.groupby.sorted with distincts. (njain via 
kevinwilfong) (Revision 1455650)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455650
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/GroupByDesc.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_sort_8.q
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_8.q.out


> problem in hive.map.groupby.sorted with distincts
> -
>
> Key: HIVE-4096
> URL: https://issues.apache.org/jira/browse/HIVE-4096
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Attachments: hive.4096.1.patch
>
>
> set hive.enforce.bucketing = true;
> set hive.enforce.sorting = true;
> set hive.exec.reducers.max = 10;
> set hive.map.groupby.sorted=true;
> CREATE TABLE T1(key STRING, val STRING) PARTITIONED BY (ds string)
> CLUSTERED BY (key) SORTED BY (key) INTO 2 BUCKETS STORED AS TEXTFILE;
> LOAD DATA LOCAL INPATH '../data/files/T1.txt' INTO TABLE T1 PARTITION 
> (ds='1');
> -- perform an insert to make sure there are 2 files
> INSERT OVERWRITE TABLE T1 PARTITION (ds='1') select key, val from T1 where ds 
> = '1';
> CREATE TABLE outputTbl1(cnt INT);
> -- The plan should be converted to a map-side group by, since the
> -- sorting columns and grouping columns match, and all the bucketing columns
> -- are part of sorting columns
> EXPLAIN
> select count(distinct key) from T1;
> select count(distinct key) from T1;
> explain
> INSERT OVERWRITE TABLE outputTbl1
> select count(distinct key) from T1;
> INSERT OVERWRITE TABLE outputTbl1
> select count(distinct key) from T1;
> SELECT * FROM outputTbl1;
> DROP TABLE T1;
> The above query gives wrong results

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4094) decimal_3.q & decimal_serde.q fail on hadoop 2

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623064#comment-13623064
 ] 

Hudson commented on HIVE-4094:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4094 : decimal_3.q & decimal_serde.q fail on hadoop 2 (Gunther 
Hagleitner via Ashutosh Chauhan) (Revision 1455270)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455270
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/decimal_serde.q
* /hive/trunk/ql/src/test/results/clientpositive/decimal_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/decimal_serde.q.out


> decimal_3.q & decimal_serde.q fail on hadoop 2
> --
>
> Key: HIVE-4094
> URL: https://issues.apache.org/jira/browse/HIVE-4094
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-4094.patch
>
>
> Some of the decimal unit tests fail on hadoop 2. The reason is the unspecified 
> row order in some of the queries.
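
The usual fix for this class of failure (a hedged sketch with assumed column names, not the exact diff in HIVE-4094.patch) is to make the test queries order their output explicitly:
{code}
SELECT key, value FROM decimal_3 ORDER BY key, value;
{code}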

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4092) Store complete names of tables in column access analyzer

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623063#comment-13623063
 ] 

Hudson commented on HIVE-4092:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4092. Store complete names of tables in column access analyzer (Samuel 
Yuan via kevinwilfong) (Revision 1459905)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459905
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnAccessAnalyzer.java
* /hive/trunk/ql/src/test/results/clientpositive/column_access_stats.q.out


> Store complete names of tables in column access analyzer
> 
>
> Key: HIVE-4092
> URL: https://issues.apache.org/jira/browse/HIVE-4092
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Samuel Yuan
>Assignee: Samuel Yuan
>Priority: Trivial
> Fix For: 0.11.0
>
> Attachments: HIVE-4092.HIVE-4092.HIVE-4092.D8985.1.patch
>
>
> Right now the db name is not being stored. We should store the complete name, 
> which includes the db name, as the table access analyzer does.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4097) ORC file doesn't properly interpret empty hive.io.file.readcolumn.ids

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623065#comment-13623065
 ] 

Hudson commented on HIVE-4097:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4097 : ORC file doesn't properly interpret empty 
hive.io.file.readcolumn.ids (Owen Omalley via Ashutosh Chauhan) (Revision 
1454453)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454453
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java


> ORC file doesn't properly interpret empty hive.io.file.readcolumn.ids
> -
>
> Key: HIVE-4097
> URL: https://issues.apache.org/jira/browse/HIVE-4097
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.11.0
>
> Attachments: HIVE-4097.D9015.1.patch
>
>
> Hive assumes that an empty string in hive.io.file.readcolumn.ids means all 
> columns. The ORC reader currently assumes it means no columns.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4212) sort merge join should work for outer joins for more than 8 inputs

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623062#comment-13623062
 ] 

Hudson commented on HIVE-4212:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4212 sort merge join should work for outer joins for more than 8 inputs
(Namit via Gang Tim Liu) (Revision 1460988)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1460988
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/MapJoinProcessor.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/results/clientpositive/smb_mapjoin_17.q.out


> sort merge join should work for outer joins for more than 8 inputs
> --
>
> Key: HIVE-4212
> URL: https://issues.apache.org/jira/browse/HIVE-4212
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Fix For: 0.11.0
>
> Attachments: hive.4212.1.patch, hive.4212.2.patch, hive.4212.3.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4090) Use of hive.exec.script.allow.partial.consumption can produce partial results

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623060#comment-13623060
 ] 

Hudson commented on HIVE-4090:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4090 Use of hive.exec.script.allow.partial.consumption can produce 
partial
results (Kevin Wilfong via namit) (Revision 1451171)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451171
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ScriptOperator.java


> Use of hive.exec.script.allow.partial.consumption can produce partial results
> -
>
> Key: HIVE-4090
> URL: https://issues.apache.org/jira/browse/HIVE-4090
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4090.1.patch.txt
>
>
> When users use a transform script with the config 
> hive.exec.script.allow.partial.consumption set to true, it may produce 
> partial results.
> When this config is set, the script may close its input pipe before its 
> parent operator has finished passing it rows.  In the catch block for this 
> exception, the setDone method is called, marking the operator as done.  
> However, there's a separate thread running to process rows passed from the 
> script back to Hive via stdout.  If this thread is not done processing rows, 
> any rows it forwards after the setDone method is called will not be passed to 
> its children.  This leads to partial results.
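
For context, a minimal sketch of how this config is typically used (the script name consume_some_rows.py is hypothetical):
{code}
-- after ADD FILE consume_some_rows.py
SET hive.exec.script.allow.partial.consumption=true;
SELECT TRANSFORM(key, value)
USING 'python consume_some_rows.py' AS (key, value)
FROM src;
{code}
With the config on, the query is allowed to succeed even though the script stops reading its input early; the bug described above could additionally drop rows the script had already emitted.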

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4056) Extend rcfilecat to support (un)compressed size and no. of row

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623061#comment-13623061
 ] 

Hudson commented on HIVE-4056:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4056 Extend rcfilecat to support (un)compressed size and no. of row
(Gang Tim Liu via namit) (Revision 1451130)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451130
Files : 
* /hive/trunk/cli/src/java/org/apache/hadoop/hive/cli/RCFileCat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/RCFile.java


> Extend rcfilecat to support (un)compressed size and no. of row
> --
>
> Key: HIVE-4056
> URL: https://issues.apache.org/jira/browse/HIVE-4056
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Gang Tim Liu
>Assignee: Gang Tim Liu
> Fix For: 0.11.0
>
> Attachments: HIVE-4056.patch.1
>
>
> rcfilecat supports data and metadata:
> https://cwiki.apache.org/Hive/rcfilecat.html
> In metadata, it supports column statistics.
> It would be natural to extend the metadata support to: 
> 1. no. of rows 
> 2. uncompressed size for the file
> 3. compressed size for the file

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-2264) Hive server is SHUTTING DOWN when invalid queries are being executed.

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623059#comment-13623059
 ] 

Hudson commented on HIVE-2264:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-2264 Hive server is SHUTTING DOWN when invalid queries are being executed 
(Revision 1462406)

 Result = FAILURE
navis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462406
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/Driver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapRedTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapredLocalTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Task.java


> Hive server is SHUTTING DOWN when invalid queries are being executed.
> --
>
> Key: HIVE-2264
> URL: https://issues.apache.org/jira/browse/HIVE-2264
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Query Processor
>Affects Versions: 0.9.0
> Environment: SuSE-Linux-11
>Reporter: rohithsharma
>Assignee: Navis
>Priority: Blocker
> Fix For: 0.11.0
>
> Attachments: HIVE-2264.1.patch.txt, HIVE-2264-2.patch, 
> HIVE-2264.D9489.1.patch
>
>
> When an invalid query is being executed, the Hive server shuts down.
> {noformat}
> "CREATE TABLE SAMPLETABLE(IP STRING , showtime BIGINT ) partitioned by (ds 
> string,ipz int) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\040'"
> "ALTER TABLE SAMPLETABLE add Partition(ds='sf') location 
> '/user/hive/warehouse' Partition(ipz=100) location '/user/hive/warehouse'"
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4156) need to add protobuf classes to hive-exec.jar

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623056#comment-13623056
 ] 

Hudson commented on HIVE-4156:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4156 : need to add protobuf classes to hive-exec.jar (Owen Omalley via 
Ashutosh Chauhan) (Revision 1464245)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464245
Files : 
* /hive/trunk/ql/build.xml


> need to add protobuf classes to hive-exec.jar
> -
>
> Key: HIVE-4156
> URL: https://issues.apache.org/jira/browse/HIVE-4156
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.11.0
>
> Attachments: HIVE-4156.D9375.1.patch, HIVE-4156.D9375.2.patch
>
>
> In some queries, the tasks fail when they can't find classes from the 
> protobuf library.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4219) explain dependency does not capture the input table

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623058#comment-13623058
 ] 

Hudson commented on HIVE-4219:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4219 explain dependency does not capture the input table
(Namit via Gang Tim Liu) (Revision 1460971)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1460971
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/SimpleFetchOptimizer.java
* /hive/trunk/ql/src/test/queries/clientpositive/explain_dependency2.q
* /hive/trunk/ql/src/test/results/clientnegative/alter_partition_offline.q.out
* /hive/trunk/ql/src/test/results/clientnegative/protectmode_part.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter5.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/alter_partition_protect_mode.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_rename_partition.q.out
* /hive/trunk/ql/src/test/results/clientpositive/escape1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/escape2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_02_00_part_empty.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_02_part.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_04_all_part.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_04_evolved_parts.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_05_some_part.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_06_one_part.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/exim_07_all_part_over_nonoverlap.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/exim_09_part_spec_nonoverlap.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_15_external_part.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_16_part_external.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_17_part_managed.q.out
* /hive/trunk/ql/src/test/results/clientpositive/exim_18_part_external.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/exim_19_00_part_external_location.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/exim_19_part_external_location.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/exim_20_part_managed_location.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/exim_23_import_part_authsuccess.q.out
* /hive/trunk/ql/src/test/results/clientpositive/explain_dependency2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/filter_join_breaktask2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/groupby11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_bitmap.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_compact.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input12_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input28.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part0.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part10.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input_part8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/insertexternal1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part10.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/load_dyn_part9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/loadpart1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/merge4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mi.q.out
* /hive/trunk/ql/src/test/results/clientpositive/nonmr_fetch.q.out
* /hive/trunk/ql/src/test/results/clientpositive/null_column.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat10.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat11.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat12.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_file

[jira] [Commented] (HIVE-4155) Expose ORC's FileDump as a service

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623057#comment-13623057
 ] 

Hudson commented on HIVE-4155:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4155: Expose ORC's FileDump as a service (Revision 1462352)

 Result = FAILURE
gangtimliu : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462352
Files : 
* /hive/trunk/bin/ext/orcfiledump.sh
* /hive/trunk/bin/hive


> Expose ORC's FileDump as a service
> --
>
> Key: HIVE-4155
> URL: https://issues.apache.org/jira/browse/HIVE-4155
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4155.1.patch.txt
>
>
> Expose ORC's FileDump class as a service similar to RC File Cat
> e.g.
> hive --orcfiledump 
> Should run FileDump on the file.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4154) NPE reading column of empty string from ORC file

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623054#comment-13623054
 ] 

Hudson commented on HIVE-4154:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4154 NPE reading column of empty string from ORC file
(Kevin Wilfong via namit) (Revision 1458570)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1458570
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* /hive/trunk/ql/src/test/queries/clientpositive/orc_empty_strings.q
* /hive/trunk/ql/src/test/results/clientpositive/orc_empty_strings.q.out


> NPE reading column of empty string from ORC file
> 
>
> Key: HIVE-4154
> URL: https://issues.apache.org/jira/browse/HIVE-4154
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4154.1.patch.txt, HIVE-4154.2.patch.txt
>
>
> If a String column contains only empty strings, a null pointer exception is 
> thrown from the RecordReaderImpl for ORC.
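
A rough repro sketch under assumed table and column names (the committed test is orc_empty_strings.q, which may differ):
{code}
CREATE TABLE orc_empty (s STRING) STORED AS ORC;
INSERT OVERWRITE TABLE orc_empty SELECT '' FROM src LIMIT 10;
SELECT s FROM orc_empty;  -- threw a NullPointerException before this fix
{code}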

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4217) Fix show_create_table_*.q test failures

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623055#comment-13623055
 ] 

Hudson commented on HIVE-4217:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4217. Fix show_create_table_*.q test failures (Carl Steinbach via cws) 
(Revision 1459569)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459569
Files : 
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/metastore/ivy.xml
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java


> Fix show_create_table_*.q test failures
> ---
>
> Key: HIVE-4217
> URL: https://issues.apache.org/jira/browse/HIVE-4217
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Carl Steinbach
>Assignee: Carl Steinbach
> Fix For: 0.11.0
>
> Attachments: HIVE-4217.1.patch.txt, HIVE-4217.2.patch.txt
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4119) ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table is empty

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623052#comment-13623052
 ] 

Hudson commented on HIVE-4119:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4119. ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE 
if the table is empty (Shreepadma Venugopalan via cws) (Revision 1464208)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464208
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ColumnStatsTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ColumnStatsWork.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java
* /hive/trunk/ql/src/test/queries/clientpositive/columnstats_tbllvl.q
* /hive/trunk/ql/src/test/queries/clientpositive/compute_stats_empty_table.q
* /hive/trunk/ql/src/test/results/clientpositive/columnstats_tbllvl.q.out
* /hive/trunk/ql/src/test/results/clientpositive/compute_stats_empty_table.q.out


> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table 
> is empty
> -
>
> Key: HIVE-4119
> URL: https://issues.apache.org/jira/browse/HIVE-4119
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Affects Versions: 0.10.0
>Reporter: Lenni Kuff
>Assignee: Shreepadma Venugopalan
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: HIVE-4119.1.patch, HIVE-4119.2.patch
>
>
> ANALYZE TABLE ... COMPUTE STATISTICS FOR COLUMNS fails with NPE if the table 
> is empty
> {code}
> hive -e "create table empty_table (i int); select compute_stats(i, 16) from 
> empty_table"
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.WritableIntObjectInspector.get(WritableIntObjectInspector.java:35)
>   at 
> org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorUtils.getInt(PrimitiveObjectInspectorUtils.java:535)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFComputeStats$GenericUDAFLongStatsEvaluator.iterate(GenericUDAFComputeStats.java:477)
>   at 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDAFEvaluator.aggregate(GenericUDAFEvaluator.java:139)
>   at 
> org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1099)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:558)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
>   at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:193)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.GroupByOperator.closeOp(GroupByOperator.java:1132)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:558)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
>   at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:567)
>   at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:193)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:428)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:231)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runT

[jira] [Commented] (HIVE-4159) RetryingHMSHandler doesn't retry in enough cases

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623053#comment-13623053
 ] 

Hudson commented on HIVE-4159:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4159: RetryingHMSHandler doesn't retry in enough cases (Kevin Wilfong 
via Gang Tim Liu) (Revision 1462350)

 Result = FAILURE
gangtimliu : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462350
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingHMSHandler.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/AlternateFailurePreListener.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestRetryingHMSHandler.java


> RetryingHMSHandler doesn't retry in enough cases
> 
>
> Key: HIVE-4159
> URL: https://issues.apache.org/jira/browse/HIVE-4159
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4159.1.patch.txt
>
>
> HIVE-3524 introduced a change which caused JDOExceptions to be wrapped in 
> MetaExceptions.  This caused the RetryingHMSHandler to not retry on these 
> exceptions.
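
To make the cause-chain point concrete, here is a minimal sketch of retrying only when a JDOException appears anywhere in the cause chain, even when it is wrapped in another exception such as a MetaException. This is illustrative only, not the actual RetryingHMSHandler code; the class and method names are invented, and it assumes the javax.jdo API is on the classpath (which the metastore already requires):

{code}
import java.util.concurrent.Callable;
import javax.jdo.JDOException;

// Illustrative sketch only, not Hive's RetryingHMSHandler. It shows the idea of
// walking the cause chain so a wrapped JDOException still triggers a retry.
public class RetryOnWrappedJdoException {

  // Treat an exception as retriable if it, or any exception in its cause chain,
  // is a JDOException.
  static boolean isRetriable(Throwable t) {
    for (Throwable cur = t; cur != null; cur = cur.getCause()) {
      if (cur instanceof JDOException) {
        return true;
      }
    }
    return false;
  }

  static <T> T callWithRetry(Callable<T> op, int maxAttempts) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return op.call();
      } catch (Exception e) {
        if (attempt >= maxAttempts || !isRetriable(e)) {
          throw e;  // give up: out of attempts, or not a (wrapped) JDOException
        }
      }
    }
  }
}
{code}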

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4015) Add ORC file to the grammar as a file format

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623051#comment-13623051
 ] 

Hudson commented on HIVE-4015:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4015. Add ORC file to the grammar as a file format. (Gunther 
Hagleitner via kevinwilfong) (Revision 1459030)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459030
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
* /hive/trunk/ql/src/test/queries/clientpositive/orc_create.q
* /hive/trunk/ql/src/test/queries/clientpositive/orc_createas1.q
* /hive/trunk/ql/src/test/results/clientpositive/orc_create.q.out
* /hive/trunk/ql/src/test/results/clientpositive/orc_createas1.q.out


> Add ORC file to the grammar as a file format
> 
>
> Key: HIVE-4015
> URL: https://issues.apache.org/jira/browse/HIVE-4015
> Project: Hive
>  Issue Type: Improvement
>Reporter: Owen O'Malley
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-4015.1.patch, HIVE-4015.2.patch, HIVE-4015.3.patch, 
> HIVE-4015.4.patch, HIVE-4015.5.patch
>
>
> It would be much more convenient for users if we enable them to use ORC as a 
> file format in the HQL grammar. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4157) ORC runs out of heap when writing

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623049#comment-13623049
 ] 

Hudson commented on HIVE-4157:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4157: ORC runs out of heap when writing (Kevin Wilfong via Gang Tim 
Liu) (Revision 1462363)

 Result = FAILURE
gangtimliu : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462363
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OutStream.java


> ORC runs out of heap when writing
> -
>
> Key: HIVE-4157
> URL: https://issues.apache.org/jira/browse/HIVE-4157
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4157.1.patch.txt
>
>
> The OutStream class used by the ORC file format seems to aggressively 
> allocate memory for ByteBuffers and doesn't seem too eager to give it back.
> This causes issues with heap space, particularly when wide tables or dynamic 
> partitions are involved.
> As a first step to resolving this problem, the OutStream class can be 
> modified to lazily allocate memory, and more actively make it available for 
> garbage collection.
> Follow ups could include checking the amount of free memory as part of 
> determining if a spill is needed.
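
A rough sketch of the lazy-allocate-and-release idea follows; this is not ORC's actual OutStream, and the class and method names are invented for illustration:

{code}
import java.nio.ByteBuffer;

// Hedged sketch, not ORC's OutStream: the buffer is only allocated once the
// first byte arrives, and the reference is dropped after a spill so the memory
// becomes eligible for garbage collection instead of being held forever.
class LazyBufferedStream {
  private final int bufferSize;
  private ByteBuffer current;  // stays null until the stream is actually written to

  LazyBufferedStream(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  void write(byte b) {
    if (current == null) {
      current = ByteBuffer.allocate(bufferSize);  // lazy allocation
    }
    current.put(b);
    if (!current.hasRemaining()) {
      spill();
    }
  }

  private void spill() {
    current.flip();
    // ... hand the buffer's bytes to the next stage (compression, file writer, ...)
    current = null;  // release the reference so it can be garbage collected
  }
}
{code}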

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3381) Result of outer join is not valid

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623047#comment-13623047
 ] 

Hudson commented on HIVE-3381:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3381 : Result of outer join is not valid (Navis via Ashutosh Chauhan) 
(Revision 1461234)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461234
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/MapJoinObjectValue.java
* /hive/trunk/ql/src/test/queries/clientpositive/mapjoin_test_outer.q
* /hive/trunk/ql/src/test/results/clientpositive/auto_join21.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join29.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join_filters.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join21.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_1to1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_filters.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapjoin1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mapjoin_test_outer.q.out


> Result of outer join is not valid
> -
>
> Key: HIVE-3381
> URL: https://issues.apache.org/jira/browse/HIVE-3381
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.10.0
>Reporter: Navis
>Assignee: Navis
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: HIVE-3381.D5565.3.patch, HIVE-3381.D5565.4.patch, 
> HIVE-3381.D5565.5.patch, HIVE-3381.D5565.6.patch, HIVE-3381.D5565.7.patch, 
> mapjoin_testOuter.q
>
>
> Outer joins, especially full outer joins or outer joins with a filter in the ON 
> clause, are not showing proper results. For example, the query in test join_1to1.q
> {code}
> SELECT * FROM join_1to1_1 a full outer join join_1to1_2 b on a.key1 = b.key1 
> and a.value = 66 and b.value = 66 ORDER BY a.key1 ASC, a.key2 ASC, a.value 
> ASC, b.key1 ASC, b.key2 ASC, b.value ASC;
> {code}
> results
> {code}
> NULL    NULL    NULL    NULL    NULL    66
> NULL    NULL    NULL    NULL    10050   66
> NULL    NULL    NULL    10      10010   66
> NULL    NULL    NULL    30      10030   88
> NULL    NULL    NULL    35      10035   88
> NULL    NULL    NULL    40      10040   88
> NULL    NULL    NULL    40      10040   88
> NULL    NULL    NULL    50      10050   88
> NULL    NULL    NULL    50      10050   88
> NULL    NULL    NULL    50      10050   88
> NULL    NULL    NULL    70      10040   88
> NULL    NULL    NULL    70      10040   88
> NULL    NULL    NULL    70      10040   88
> NULL    NULL    NULL    70      10040   88
> NULL    NULL    66      NULL    NULL    NULL
> NULL    10050   66      NULL    NULL    NULL
> 5       10005   66      5       10005   66
> 15      10015   66      NULL    NULL    NULL
> 20      10020   66      20      10020   66
> 25      10025   88      NULL    NULL    NULL
> 30      10030   66      NULL    NULL    NULL
> 35      10035   88      NULL    NULL    NULL
> 40      10040   66      NULL    NULL    NULL
> 40      10040   66      40      10040   66
> 40      10040   88      NULL    NULL    NULL
> 40      10040   88      NULL    NULL    NULL
> 50      10050   66      NULL    NULL    NULL
> 50      10050   66      50      10050   66
> 50      10050   66      50      10050   66
> 50      10050   88      NULL    NULL    NULL
> 50      10050   88      NULL    NULL    NULL
> 50      10050   88      NULL    NULL    NULL
> 50      10050   88      NULL    NULL    NULL
> 50      10050   88      NULL    NULL    NULL
> 50      10050   88      NULL    NULL    NULL
> 60      10040   66      60      10040   66
> 60      10040   66      60      10040   66
> 60      10040   66      60      10040   66
> 60      10040   66      60      10040   66
> 70      10040   66      NULL    NULL    NULL
> 70      10040   66      NULL    NULL    NULL
> 70      10040   66      NULL    NULL    NULL
> 70      10040   66      NULL    NULL    NULL
> 80      10040   88      NULL    NULL    NULL
> 80      10040   88      NULL    NULL    NULL
> 80      10040   88      NULL    NULL    NULL
> 80      10040   88      NUL

[jira] [Commented] (HIVE-3717) Hive won't compile with -Dhadoop.mr.rev=20S

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623048#comment-13623048
 ] 

Hudson commented on HIVE-3717:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3717 : Hive won't compile with -Dhadoop.mr.rev=20S (Gunther Hagleitner 
via Ashutosh Chauhan) (Revision 1455560)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455560
Files : 
* /hive/trunk/build.properties
* /hive/trunk/ql/src/test/queries/clientpositive/archive_excludeHadoop20.q
* /hive/trunk/ql/src/test/queries/clientpositive/auto_join14.q
* /hive/trunk/ql/src/test/queries/clientpositive/auto_join14_hadoop20.q
* /hive/trunk/ql/src/test/queries/clientpositive/ctas.q
* /hive/trunk/ql/src/test/queries/clientpositive/ctas_hadoop20.q
* /hive/trunk/ql/src/test/queries/clientpositive/input12.q
* /hive/trunk/ql/src/test/queries/clientpositive/input12_hadoop20.q
* /hive/trunk/ql/src/test/queries/clientpositive/join14.q
* /hive/trunk/ql/src/test/queries/clientpositive/join14_hadoop20.q
* /hive/trunk/ql/src/test/results/clientpositive/auto_join14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join14_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ctas_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input12_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/join14_hadoop20.q.out


> Hive won't compile with -Dhadoop.mr.rev=20S
> ---
>
> Key: HIVE-3717
> URL: https://issues.apache.org/jira/browse/HIVE-3717
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure, Shims
>Affects Versions: 0.10.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-3717.1-1410543.txt, HIVE-3717.2.patch, 
> HIVE-3717.3.patch, HIVE-3717.4.patch, HIVE-3717.5.patch
>
>
> ant -Dhadoop.mr.rev=20S clean package
> fails with: 
> {noformat}
> compile:
>  [echo] Project: ql
> [javac] Compiling 744 source files to /root/hive/build/ql/classes
> [javac] 
> /root/hive/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFJson.java:67: cannot 
> find symbol
> [javac] symbol  : variable ALLOW_UNQUOTED_CONTROL_CHARS
> [javac] location: class org.codehaus.jackson.JsonParser.Feature
> [javac] JSON_FACTORY.enable(Feature.ALLOW_UNQUOTED_CONTROL_CHARS);
> [javac]^
> [javac] 
> /root/hive/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFJson.java:158: cannot 
> find symbol
> [javac] symbol  : method writeValueAsString(java.lang.Object)
> [javac] location: class org.codehaus.jackson.map.ObjectMapper
> [javac] result.set(MAPPER.writeValueAsString(extractObject));
> [javac]  ^
> [javac] 
> /root/hive/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFJSONTuple.java:59:
>  cannot find symbol
> [javac] symbol  : variable ALLOW_UNQUOTED_CONTROL_CHARS
> [javac] location: class org.codehaus.jackson.JsonParser.Feature
> [javac] JSON_FACTORY.enable(Feature.ALLOW_UNQUOTED_CONTROL_CHARS);
> [javac]^
> [javac] 
> /root/hive/ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDTFJSONTuple.java:189:
>  cannot find symbol
> [javac] symbol  : method writeValueAsString(java.lang.Object)
> [javac] location: class org.codehaus.jackson.map.ObjectMapper
> [javac]   
> retCols[i].set(MAPPER.writeValueAsString(extractObject));
> [javac]^
> [javac] Note: Some input files use or override a deprecated API.
> [javac] Note: Recompile with -Xlint:deprecation for details.
> [javac] Note: Some input files use unchecked or unsafe operations.
> [javac] Note: Recompile with -Xlint:unchecked for details.
> [javac] 4 errors
> {noformat}
> According to https://issues.apache.org/jira/browse/HADOOP-7470 hadoop 1.x has 
> been upgraded to jackson 1.8.8 but the POM file still specifies jackson 1.0.1 
> which doesn't work for hive (doesn't have the ALLOW_UNQUOTED_CONTROL_CHARS).
> The POM for hadoop 2.0.0-alpha (-Dhadoop.mr.rev=23) has the right dependency, 
> hadoop 0.20.2 (-Dhadoop.mr.rev=20) doesn't depend on jackson.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4264) Move HCatalog trunk code from trunk/hcatalog/historical to trunk/hcatalog

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623046#comment-13623046
 ] 

Hudson commented on HIVE-4264:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4264 Moved hcatalog trunk code up to hive/trunk/hcatalog (Revision 
1462675)

 Result = FAILURE
gates : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462675
Files : 
* /hive/trunk/hcatalog/bin
* /hive/trunk/hcatalog/build-support
* /hive/trunk/hcatalog/build-support/ant/build-common.xml
* /hive/trunk/hcatalog/build-support/ant/checkstyle.xml
* /hive/trunk/hcatalog/build-support/checkstyle/coding_style.xml
* /hive/trunk/hcatalog/build.properties
* /hive/trunk/hcatalog/build.xml
* /hive/trunk/hcatalog/conf
* /hive/trunk/hcatalog/core
* /hive/trunk/hcatalog/core/pom.xml
* /hive/trunk/hcatalog/hcatalog-pig-adapter
* /hive/trunk/hcatalog/hcatalog-pig-adapter/pom.xml
* /hive/trunk/hcatalog/historical/trunk/DISCLAIMER.txt
* /hive/trunk/hcatalog/historical/trunk/bin
* /hive/trunk/hcatalog/historical/trunk/build-support
* /hive/trunk/hcatalog/historical/trunk/build.properties
* /hive/trunk/hcatalog/historical/trunk/conf
* /hive/trunk/hcatalog/historical/trunk/core
* /hive/trunk/hcatalog/historical/trunk/hcatalog-pig-adapter
* /hive/trunk/hcatalog/historical/trunk/pom.xml
* /hive/trunk/hcatalog/historical/trunk/scripts
* /hive/trunk/hcatalog/historical/trunk/server-extensions
* /hive/trunk/hcatalog/historical/trunk/shims
* /hive/trunk/hcatalog/historical/trunk/src/docs
* /hive/trunk/hcatalog/historical/trunk/src/packages
* /hive/trunk/hcatalog/historical/trunk/src/test/e2e
* /hive/trunk/hcatalog/historical/trunk/storage-handlers
* /hive/trunk/hcatalog/historical/trunk/webhcat
* /hive/trunk/hcatalog/ivy.xml
* /hive/trunk/hcatalog/pom.xml
* /hive/trunk/hcatalog/scripts
* /hive/trunk/hcatalog/server-extensions
* /hive/trunk/hcatalog/server-extensions/pom.xml
* /hive/trunk/hcatalog/shims
* /hive/trunk/hcatalog/src/docs
* /hive/trunk/hcatalog/src/java/org/apache/hive/hcatalog/package-info.java
* /hive/trunk/hcatalog/src/packages
* /hive/trunk/hcatalog/src/test/e2e
* /hive/trunk/hcatalog/storage-handlers
* /hive/trunk/hcatalog/storage-handlers/hbase/build.xml
* /hive/trunk/hcatalog/storage-handlers/hbase/pom.xml
* /hive/trunk/hcatalog/webhcat
* /hive/trunk/hcatalog/webhcat/java-client/pom.xml
* /hive/trunk/hcatalog/webhcat/svr/pom.xml


> Move HCatalog trunk code from trunk/hcatalog/historical to trunk/hcatalog
> -
>
> Key: HIVE-4264
> URL: https://issues.apache.org/jira/browse/HIVE-4264
> Project: Hive
>  Issue Type: Sub-task
>  Components: HCatalog
>Affects Versions: 0.11.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Fix For: 0.11.0
>
> Attachments: HIVE-4264.patch
>
>
> The trunk HCatalog code needs to be moved into the right place.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4269) fix handling of binary type in hiveserver2, jdbc driver

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623044#comment-13623044
 ] 

Hudson commented on HIVE-4269:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4269 : fix handling of binary type in hiveserver2, jdbc driver (Thejas 
Nair via Ashutosh Chauhan) (Revision 1464037)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464037
Files : 
* /hive/trunk/data/files/datatypes.txt
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveBaseResultSet.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/HiveResultSetMetaData.java
* /hive/trunk/jdbc/src/java/org/apache/hive/jdbc/Utils.java
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* 
/hive/trunk/service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java


> fix handling of binary type in hiveserver2, jdbc driver
> ---
>
> Key: HIVE-4269
> URL: https://issues.apache.org/jira/browse/HIVE-4269
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.11.0
>
> Attachments: HIVE-4269.1.patch
>
>
> Need to use 'new String(byte[])' instead of 'byte[].toString()' when 
> converting to String for jdbc, in SQLOperation.convertLazyToJava()
> Need to add support for binary in jdbc driver code.
> The exception that gets thrown while trying to access a binary column is like 
> this - 
> {code}
> [junit] org.apache.hive.service.cli.HiveSQLException: 
> java.lang.ClassCastException: [B cannot be cast to java.lang.String
> [junit] at 
> org.apache.hive.service.cli.operation.SQLOperation.getNextRowSet(SQLOperation.java:188)
> [junit] at 
> org.apache.hive.service.cli.operation.OperationManager.getOperationNextRowSet(OperationManager.java:170)
> [junit] at 
> org.apache.hive.service.cli.session.HiveSessionImpl.fetchResults(HiveSessionImpl.java:324)
> [junit] at 
> org.apache.hive.service.cli.CLIService.fetchResults(CLIService.java:290)
> {code}
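
The byte[]-to-String point is easy to see in isolation with plain Java; the following is a standalone demo, independent of the Hive/JDBC code:

{code}
import java.nio.charset.StandardCharsets;

// Small standalone demo of the conversion issue described above.
public class BinaryToStringDemo {
  public static void main(String[] args) {
    byte[] data = "hello".getBytes(StandardCharsets.UTF_8);

    // byte[].toString() is just Object.toString(): it prints something like
    // "[B@1b6d3586", i.e. the array type and hash code, not the contents.
    System.out.println(data.toString());

    // new String(byte[], charset) decodes the bytes into the character data.
    System.out.println(new String(data, StandardCharsets.UTF_8));  // prints "hello"
  }
}
{code}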

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4263) Adjust build.xml package command to move all hcat jars and binaries into build

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623045#comment-13623045
 ] 

Hudson commented on HIVE-4263:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4263 : Adjust build.xml package command to move all hcat jars and 
binaries into build (Alan Gates via Ashutosh Chauhan) (Revision 1462674)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462674
Files : 
* /hive/trunk/build.xml


> Adjust build.xml package command to move all hcat jars and binaries into build
> --
>
> Key: HIVE-4263
> URL: https://issues.apache.org/jira/browse/HIVE-4263
> Project: Hive
>  Issue Type: Sub-task
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
>Reporter: Alan Gates
>Assignee: Alan Gates
> Fix For: 0.11.0
>
> Attachments: HIVE-4263.patch
>
>
> HIVE-4198 created a line in build.xml so that as part of the package target   
> hive-hcatalog-${version}.jar is moved into the top level build directory.  
> However, hcatalog produces several jars and several scripts.  The package 
> target needs to be changed to move all of these up into build.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3958) support partial scan for analyze command - RCFile

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623043#comment-13623043
 ] 

Hudson commented on HIVE-3958:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3958 support partial scan for analyze command - RCFile
(Gang Tim Liu via namit) (Revision 1461586)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461586
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/StatsTask.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/TaskFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/merge/RCFileKeyBufferWrapper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/stats
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanMapper.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanTask.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/rcfile/stats/PartialScanWork.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GenMRTableScan1.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/QBParseInfo.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/StatsWork.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/stats/jdbc/JDBCStatsAggregator.java
* /hive/trunk/ql/src/test/queries/clientnegative/stats_partialscan_autogether.q
* 
/hive/trunk/ql/src/test/queries/clientnegative/stats_partialscan_non_external.q
* /hive/trunk/ql/src/test/queries/clientnegative/stats_partialscan_non_native.q
* /hive/trunk/ql/src/test/queries/clientnegative/stats_partscan_norcfile.q
* /hive/trunk/ql/src/test/queries/clientpositive/stats_partscan_1.q
* 
/hive/trunk/ql/src/test/results/clientnegative/stats_partialscan_autogether.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/stats_partialscan_non_external.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/stats_partialscan_non_native.q.out
* /hive/trunk/ql/src/test/results/clientnegative/stats_partscan_norcfile.q.out
* /hive/trunk/ql/src/test/results/clientpositive/stats_partscan_1.q.out


> support partial scan for analyze command - RCFile
> -
>
> Key: HIVE-3958
> URL: https://issues.apache.org/jira/browse/HIVE-3958
> Project: Hive
>  Issue Type: Improvement
>Reporter: Gang Tim Liu
>Assignee: Gang Tim Liu
> Fix For: 0.11.0
>
> Attachments: HIVE-3958.patch.1, HIVE-3958.patch.2, HIVE-3958.patch.3, 
> HIVE-3958.patch.4, HIVE-3958.patch.5, HIVE-3958.patch.6
>
>
> The analyze command allows us to collect statistics on existing 
> tables/partitions. It works great but might be slow since it scans all files.
> There are 2 ways to speed it up:
> 1. Collect stats without a file scan. It may not collect all stats, but it is good and 
> fast enough for many use cases. HIVE-3917 addresses this.
> 2. Collect stats via a partial file scan. It doesn't scan all the content of the files, 
> but only part of it, to get file metadata. Some examples are 
> https://cwiki.apache.org/Hive/rcfilecat.html for RCFile, ORC ( HIVE-3874 ) 
> and HFile of HBase.
> This jira is targeted to address #2, more specifically the RCFile format.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4007) Create abstract classes for serializer and deserializer

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623041#comment-13623041
 ] 

Hudson commented on HIVE-4007:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4007 : Create abstract classes for serializer and deserializer (Namit 
Jain via Ashutosh Chauhan) (Revision 1461235)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1461235
Files : 
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/RegexSerDe.java
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/TypedBytesSerDe.java
* 
/hive/trunk/contrib/src/java/org/apache/hadoop/hive/contrib/serde2/s3/S3LogDeserializer.java
* 
/hive/trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/serde2/TestSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/AbstractDeserializer.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/AbstractSerializer.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/Deserializer.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/MetadataTypedColumnsetSerDe.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/NullStructSerDe.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/SerDe.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/Serializer.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/TypedSerDe.java
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarSerDeBase.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinarySerDe.java
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/thrift/ThriftDeserializer.java


> Create abstract classes for serializer and deserializer
> ---
>
> Key: HIVE-4007
> URL: https://issues.apache.org/jira/browse/HIVE-4007
> Project: Hive
>  Issue Type: Improvement
>  Components: Serializers/Deserializers
>Reporter: Namit Jain
>Assignee: Namit Jain
> Fix For: 0.11.0
>
> Attachments: hive.4007.1.patch, hive.4007.2.patch, hive.4007.3.patch, 
> hive.4007.4.patch
>
>
> Currently, it is very difficult to change the Serializer/Deserializer
> interface, since all the SerDes directly implement the interface.
> Instead, we should have abstract classes for implementing these interfaces.
> In case of an interface change, only the abstract class and the relevant 
> serde need to change.
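
A Hive-agnostic sketch of why the abstract-class layer helps with interface evolution; the types below are invented for illustration and are not Hive's SerDe classes:

{code}
// Made-up types, used only to illustrate the evolution argument: when
// implementations extend an abstract base instead of implementing the
// interface directly, a newly added interface method only needs a default
// in the abstract class, and the concrete implementations stay untouched.
interface Codec {
  byte[] encode(String s);
  String describe();              // imagine this method was added later
}

abstract class AbstractCodec implements Codec {
  @Override
  public String describe() {      // one place absorbs the interface change
    return getClass().getSimpleName();
  }
}

class UpperCodec extends AbstractCodec {
  @Override
  public byte[] encode(String s) {
    return s.toUpperCase().getBytes(java.nio.charset.StandardCharsets.UTF_8);
  }
}
{code}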

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4127) Testing with Hadoop 2.x causes test failure for ORC's TestFileDump

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623042#comment-13623042
 ] 

Hudson commented on HIVE-4127:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4127 : Testing with Hadoop 2.x causes test failure for ORC 
TestFileDump (Owen Omalley via Ashutosh Chauhan) (Revision 1454736)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454736
Files : 
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestFileDump.java
* /hive/trunk/ql/src/test/resources/orc-file-dump.out


> Testing with Hadoop 2.x causes test failure for ORC's TestFileDump
> --
>
> Key: HIVE-4127
> URL: https://issues.apache.org/jira/browse/HIVE-4127
> Project: Hive
>  Issue Type: New Feature
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.11.0
>
> Attachments: HIVE-4127.D9111.1.patch
>
>
> Hadoop 2's junit is a newer version, which causes differences in the behavior of 
> TestFileDump. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4125) Expose metastore JMX metrics

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623040#comment-13623040
 ] 

Hudson commented on HIVE-4125:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4125. Expose metastore JMX metrics. (Samuel Yuan via kevinwilfong) 
(Revision 1455668)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455668
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/common/metrics/Metrics.java


> Expose metastore JMX metrics
> 
>
> Key: HIVE-4125
> URL: https://issues.apache.org/jira/browse/HIVE-4125
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 0.11.0
>Reporter: Samuel Yuan
>Assignee: Samuel Yuan
>Priority: Trivial
> Attachments: HIVE-4125.HIVE-4125.HIVE-4125.D9123.1.patch, 
> HIVE-4125.HIVE-4125.HIVE-4125.D9123.2.patch
>
>
> Add a safe way to access the metrics stored for each MetricsScope, so that 
> they can be used outside of JMX.
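
One common shape for such an accessor, shown as a generic sketch; this is not Hive's Metrics or MetricsScope classes, and the names are invented:

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Generic illustration only: counters kept in a concurrent map, with a
// read-only snapshot so callers outside of JMX can safely read current values.
class MetricsScopeSketch {
  private final Map<String, Long> counters = new ConcurrentHashMap<>();

  void increment(String name) {
    counters.merge(name, 1L, Long::sum);  // atomic per-key update
  }

  Map<String, Long> snapshot() {
    // Copy first, then wrap, so callers never observe concurrent modifications.
    return Collections.unmodifiableMap(new HashMap<>(counters));
  }
}
{code}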

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4122) Queries fail if timestamp data not in expected format

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623039#comment-13623039
 ] 

Hudson commented on HIVE-4122:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4122 : Queries fail if timestamp data not in expected format (Prasad 
Mujumdar via Ashutosh Chauhan) (Revision 1462874)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462874
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/timestamp_null.q
* /hive/trunk/ql/src/test/results/clientpositive/timestamp_null.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyTimestamp.java


> Queries fail if timestamp data not in expected format
> -
>
> Key: HIVE-4122
> URL: https://issues.apache.org/jira/browse/HIVE-4122
> Project: Hive
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 0.10.0
>Reporter: Lenni Kuff
>Assignee: Prasad Mujumdar
> Fix For: 0.11.0
>
> Attachments: HIVE-4122-1.patch, HIVE-4188-2.patch
>
>
> Queries will fail if timestamp data is not in the expected format. The expected 
> behavior is to return NULL for these invalid values.
> {code}
> # Not all timestamps in correct format:
> echo "1999-10-10
> 1999-10-10 90:10:10
> -01-01 00:00:00" > table.data
> hive -e "create table timestamp_tbl (t timestamp)"
> hadoop fs -put ./table.data HIVE_WAREHOUSE_DIR/timestamp_tbl/
> hive -e "select t from timestamp_tbl"
> Execution failed with exit status: 2
> 13/03/05 09:47:05 ERROR exec.Task: Execution failed with exit status: 2
> Obtaining error information
> 13/03/05 09:47:05 ERROR exec.Task: Obtaining error information
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> 13/03/05 09:47:05 ERROR exec.Task: 
> Task failed!
> Task ID:
>   Stage-1
> Logs:
> {code}
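
The intended behavior (parse failures become NULL values rather than task failures) can be sketched in plain Java; this is illustrative only, not Hive's LazyTimestamp code:

{code}
import java.sql.Timestamp;

// Illustrative only: malformed timestamp text yields null (i.e. a NULL column
// value) instead of propagating an exception that would fail the whole query.
public class TimestampParseSketch {
  static Timestamp parseOrNull(String text) {
    try {
      return Timestamp.valueOf(text);   // expects "yyyy-mm-dd hh:mm:ss[.f...]"
    } catch (IllegalArgumentException e) {
      return null;                      // invalid value -> NULL
    }
  }

  public static void main(String[] args) {
    System.out.println(parseOrNull("1999-10-10 10:10:10"));  // valid timestamp
    System.out.println(parseOrNull("1999-10-10"));           // null: time part missing
  }
}
{code}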

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4260) union_remove_12, union_remove_13 are failing on hadoop2

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623038#comment-13623038
 ] 

Hudson commented on HIVE-4260:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4260 union_remove_12, union_remove_13 are failing on hadoop2
(Gunther Hagleitner via namit) (Revision 1463479)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463479
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/union_remove_12.q
* /hive/trunk/ql/src/test/queries/clientpositive/union_remove_13.q
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_13.q.out


> union_remove_12, union_remove_13 are failing on hadoop2
> ---
>
> Key: HIVE-4260
> URL: https://issues.apache.org/jira/browse/HIVE-4260
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: HIVE-4260.1.patch
>
>
> Problem goes away if hive.mapjoin.hint is set to true. Need to investigate 
> why they are failing without it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3849) Aliased column in where clause for multi-groupby single reducer cannot be resolved

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623037#comment-13623037
 ] 

Hudson commented on HIVE-3849:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3849 Aliased column in where clause for multi-groupby single reducer 
cannot
be resolved (Navis via namit) (Revision 1451259)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451259
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* 
/hive/trunk/ql/src/test/queries/clientpositive/groupby_multi_insert_common_distinct.q
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_multi_single_reducer3.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/groupby_mutli_insert_common_distinct.q
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_insert_common_distinct.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_multi_single_reducer3.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/groupby_mutli_insert_common_distinct.q.out


> Aliased column in where clause for multi-groupby single reducer cannot be 
> resolved
> --
>
> Key: HIVE-3849
> URL: https://issues.apache.org/jira/browse/HIVE-3849
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-3849.D7713.1.patch, HIVE-3849.D7713.2.patch, 
> HIVE-3849.D7713.3.patch, HIVE-3849.D7713.4.patch, HIVE-3849.D7713.5.patch, 
> HIVE-3849.D7713.6.patch, HIVE-3849.D7713.7.patch, HIVE-3849.D7713.8.patch
>
>
> Verifying HIVE-3847, I've found an exception is thrown before meeting the 
> error situation described in it. Something like, 
> FAILED: SemanticException [Error 10025]: Line 40:6 Expression not in GROUP BY 
> key 'crit5'

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-948) more query plan optimization rules

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623036#comment-13623036
 ] 

Hudson commented on HIVE-948:
-

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-948: more query plan optimization rules (Navis via Ashutosh Chauhan) 
(Revision 1449981)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1449981
Files : 
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes.q.out
* /hive/trunk/contrib/src/test/results/clientpositive/serde_typedbytes5.q.out
* /hive/trunk/hbase-handler/src/test/results/positive/hbase_queries.q.out
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/NonBlockingOpDeDupProc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/Optimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/PredicateTransitivePropagate.java
* /hive/trunk/ql/src/test/queries/clientpositive/nonblock_op_deduplicate.q
* /hive/trunk/ql/src/test/results/clientnegative/bucket_mapjoin_mismatch1.q.out
* 
/hive/trunk/ql/src/test/results/clientnegative/sortmerge_mapjoin_mismatch_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alias_casted_column.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ambiguous_col.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join14_hadoop20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join17.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join19.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join20.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join22.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join26.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join28.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join29.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_join9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_smb_mapjoin_14.q.out
* /hive/trunk/ql/src/test/results/clientpositive/auto_sortmerge_join_9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/binarysortable_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket_groupby.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket_map_join_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket_map_join_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketcontext_8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketizedhiveinputformat.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin10.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin9.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucketmapjoin_negative2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/bucket

[jira] [Commented] (HIVE-4281) add hive.map.groupby.sorted.testmode

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623035#comment-13623035
 ] 

Hudson commented on HIVE-4281:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4281 add hive.map.groupby.sorted.testmode
(Namit via Gang Tim Liu) (Revision 1464277)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464277
Files : 
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/QueryProperties.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/ParseContext.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_sort_test_1.q
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_test_1.q.out


> add hive.map.groupby.sorted.testmode
> 
>
> Key: HIVE-4281
> URL: https://issues.apache.org/jira/browse/HIVE-4281
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Fix For: 0.11.0
>
> Attachments: hive.4281.1.patch, hive.4281.2.patch, 
> hive.4281.2.patch-nohcat, hive.4281.3.patch
>
>
> The idea behind this would be to test hive.map.groupby.sorted.
> Since this is a new feature, it might be a good idea to run it in test mode,
> where a query property would denote that this query plan would have changed.
> If a customer wants, they can run those queries offline, compare the results
> for correctness, and set hive.map.groupby.sorted only if all the results are
> the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4186) NPE in ReduceSinkDeDuplication

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623034#comment-13623034
 ] 

Hudson commented on HIVE-4186:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4186 : NPE in ReduceSinkDeDuplication (Harish Butani via Ashutosh 
Chauhan) (Revision 1458524)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1458524
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ReduceSinkDeDuplication.java
* /hive/trunk/ql/src/test/queries/clientpositive/reducesink_dedup.q
* /hive/trunk/ql/src/test/results/clientpositive/reducesink_dedup.q.out


> NPE in ReduceSinkDeDuplication
> --
>
> Key: HIVE-4186
> URL: https://issues.apache.org/jira/browse/HIVE-4186
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Harish Butani
>Assignee: Harish Butani
> Fix For: 0.11.0
>
> Attachments: HIVE-4186.1.patch.txt, HIVE-4186.2.patch.txt, 
> HIVE-4186.3.patch.txt
>
>
> When you have a sequence of ReduceSinks on constants you get this error:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.optimizer.ReduceSinkDeDuplication$ReduceSinkDeduplicateProcFactory$ReducerReducerProc.getPartitionAndKeyColumnMapping(ReduceSinkDeDuplication.java:416)
> {noformat}
> The example that generates this is:
> {noformat}
> select p_name from (select p_name from part distribute by 1 sort by 1) p 
> distribute by 1 sort by 1
> {noformat}
> Sorry for the contrived example, but this actually happens when we stack 
> windowing clauses (see PTF-Windowing branch)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4187) QL build-grammar target fails after HIVE-4148

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623033#comment-13623033
 ] 

Hudson commented on HIVE-4187:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4187. QL build-grammar target fails after HIVE-4148 (Gunther 
Hagleitner via cws) (Revision 1459014)

 Result = FAILURE
cws : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459014
Files : 
* /hive/trunk/ivy/libraries.properties
* /hive/trunk/metastore/ivy.xml


> QL build-grammar target fails after HIVE-4148
> -
>
> Key: HIVE-4187
> URL: https://issues.apache.org/jira/browse/HIVE-4187
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Reporter: Carl Steinbach
>Assignee: Gunther Hagleitner
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: HIVE-4187.1.patch, HIVE-4187.2.patch
>
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3951) Allow Decimal type columns in Regex Serde

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623031#comment-13623031
 ] 

Hudson commented on HIVE-3951:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3951 : Allow Decimal type columns in Regex Serde (Mark Grover via 
Ashutosh Chauhan) (Revision 1463380)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463380
Files : 
* /hive/trunk/ql/src/test/queries/clientpositive/serde_regex.q
* /hive/trunk/ql/src/test/results/clientpositive/serde_regex.q.out
* /hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java


> Allow Decimal type columns in Regex Serde
> -
>
> Key: HIVE-3951
> URL: https://issues.apache.org/jira/browse/HIVE-3951
> Project: Hive
>  Issue Type: New Feature
>  Components: Serializers/Deserializers
>Affects Versions: 0.10.0
>Reporter: Mark Grover
>Assignee: Mark Grover
> Fix For: 0.11.0
>
> Attachments: HIVE-3951.1.patch, HIVE-3951.2.patch
>
>
> Decimal type in Hive was recently added by HIVE-2693. We should allow users 
> to create tables with decimal type columns when using Regex Serde. 
> HIVE-3004 did something similar for other primitive types.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4188) TestJdbcDriver2.testDescribeTable failing consistently

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623032#comment-13623032
 ] 

Hudson commented on HIVE-4188:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4188. TestJdbcDriver2.testDescribeTable failing consistently. (Prasad 
Mujumdar via kevinwilfong) (Revision 1459401)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459401
Files : 
* /hive/trunk/eclipse-templates/.classpath
* /hive/trunk/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


> TestJdbcDriver2.testDescribeTable failing consistently
> --
>
> Key: HIVE-4188
> URL: https://issues.apache.org/jira/browse/HIVE-4188
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Tests
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Prasad Mujumdar
> Fix For: 0.11.0
>
> Attachments: HIVE-4188-1.patch, HIVE-4188-2.patch
>
>
> Running in Linux on a clean checkout after running ant very-clean package, 
> the test TestJdbcDriver2.testDescribeTable fails consistently with 
> Column name 'under_col' not found expected:<under_col> but was:<# col_name >
> junit.framework.ComparisonFailure: Column name 'under_col' not found 
> expected:<under_col> but was:<# col_name >
> at junit.framework.Assert.assertEquals(Assert.java:81)
> at 
> org.apache.hive.jdbc.TestJdbcDriver2.testDescribeTable(TestJdbcDriver2.java:815)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at junit.framework.TestCase.runTest(TestCase.java:154)
> at junit.framework.TestCase.runBare(TestCase.java:127)
> at junit.framework.TestResult$1.protect(TestResult.java:106)
> at junit.framework.TestResult.runProtected(TestResult.java:124)
> at junit.framework.TestResult.run(TestResult.java:109)
> at junit.framework.TestCase.run(TestCase.java:118)
> at junit.framework.TestSuite.runTest(TestSuite.java:208)
> at junit.framework.TestSuite.run(TestSuite.java:203)
> at 
> org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
> at junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:39)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:422)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:931)
> at 
> org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:785)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3428) Fix log4j configuration errors when running hive on hadoop23

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623029#comment-13623029
 ] 

Hudson commented on HIVE-3428:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3428 : Fix log4j configuration errors when running hive on hadoop23 
(Gunther Hagleitner via Ashutosh Chauhan) (Revision 1450645)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1450645
Files : 
* /hive/trunk/common/src/java/conf/hive-log4j.properties
* /hive/trunk/data/conf/hive-log4j.properties
* /hive/trunk/pdk/scripts/conf/log4j.properties
* /hive/trunk/ql/src/java/conf/hive-exec-log4j.properties
* /hive/trunk/shims/ivy.xml
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HiveEventCounter.java
* /hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/ShimLoader.java


> Fix log4j configuration errors when running hive on hadoop23
> 
>
> Key: HIVE-3428
> URL: https://issues.apache.org/jira/browse/HIVE-3428
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0
>Reporter: Zhenxiao Luo
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-3428.1.D8805.patch, HIVE-3428.1.patch.txt, 
> HIVE-3428.2.patch.txt, HIVE-3428.3.patch.txt, HIVE-3428.4.patch.txt, 
> HIVE-3428.5.patch.txt, HIVE-3428.6.patch.txt, 
> HIVE-3428_SHIM_EVENT_COUNTER.patch
>
>
> There are log4j configuration errors when running hive on hadoop23. Some of 
> them may fail testcases, since the following log4j error messages could be 
> printed to the console or to the output file, which then diffs from the 
> expected output:
> [junit] < log4j:ERROR Could not find value for key log4j.appender.NullAppender
> [junit] < log4j:ERROR Could not instantiate appender named "NullAppender".
> [junit] < 12/09/04 11:34:42 WARN conf.HiveConf: hive-site.xml not found on 
> CLASSPATH

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4288) Add IntelliJ project files files to .gitignore

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623030#comment-13623030
 ] 

Hudson commented on HIVE-4288:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4288 Add IntelliJ project files files to .gitignore (Roshan Naik via 
Navis) (Revision 1463827)

 Result = FAILURE
navis : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463827
Files : 
* /hive/trunk/.gitignore


> Add IntelliJ project files files to .gitignore
> --
>
> Key: HIVE-4288
> URL: https://issues.apache.org/jira/browse/HIVE-4288
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0
>Reporter: Roshan Naik
>Assignee: Roshan Naik
>Priority: Minor
> Attachments: 4288.patch
>
>
> Add *.iml files & .idea dir to .gitignore 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4289) HCatalog build fails when behind a firewall

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623028#comment-13623028
 ] 

Hudson commented on HIVE-4289:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4289 HCatalog build fails when behind a firewall
(Samuel Yuan via namit) (Revision 1464292)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464292
Files : 
* /hive/trunk/hcatalog/build-support/ant/deploy.xml


> HCatalog build fails when behind a firewall
> ---
>
> Key: HIVE-4289
> URL: https://issues.apache.org/jira/browse/HIVE-4289
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure, HCatalog
>Affects Versions: 0.11.0
>Reporter: Samuel Yuan
>Assignee: Samuel Yuan
> Fix For: 0.11.0
>
> Attachments: HIVE-4289.HIVE-4289.HIVE-4289.HIVE-4289.D9921.1.patch
>
>
> A bug in Maven makes it impossible to set a proxy for a Maven Ant POM task 
> (see https://jira.codehaus.org/browse/MANTTASKS-216). Building behind a 
> firewall results in the following error:
> [artifact:pom] Downloading: org/apache/apache/11/apache-11.pom from 
> repository central at http://repo1.maven.org/maven2
> [artifact:pom] Transferring 14K from central
> [artifact:pom] [WARNING] Unable to get resource 'org.apache:apache:pom:11' 
> from repository central (http://repo1.maven.org/maven2): Error transferring 
> file: No route to host
> [artifact:pom] An error has occurred while processing the Maven artifact 
> tasks.
> [artifact:pom]  Diagnosis:
> [artifact:pom]
> [artifact:pom] Unable to initialize POM pom.xml: Cannot find parent: 
> org.apache:apache for project: 
> org.apache.hcatalog:hcatalog:pom:0.11.0-SNAPSHOT for project 
> org.apache.hcatalog:hcatalog:pom:0.11.0-SNAPSHOT
> [artifact:pom] Unable to download the artifact from any repository
> Despite the error message, Ant/Maven is actually able to retrieve the POM 
> file by using the proxy set for Ant. However, it mysteriously fails when 
> trying to retrieve the checksum, which causes the entire operation to fail. 
> Regardless, a proxy should be set through Maven's settings.xml file. Since 
> this is not possible, the only way to build HCat behind a firewall right now 
> is to manually fetch the POM file and have Maven read it from the cache.
> Ideally we would fix this in Maven, but given that this issue has been 
> reported for a long time in a number of separate places I think it is more 
> practical to modify the HCatalog build to specify the POM as a dependency, 
> fetching it into the cache so that the artifact:pom task can succeed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4235) CREATE TABLE IF NOT EXISTS uses inefficient way to check if table exists

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623027#comment-13623027
 ] 

Hudson commented on HIVE-4235:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4235. CREATE TABLE IF NOT EXISTS uses inefficient way to check if 
table exists. (Gang Tim Liu via kevinwilfong) (Revision 1462373)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462373
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java


> CREATE TABLE IF NOT EXISTS uses inefficient way to check if table exists
> 
>
> Key: HIVE-4235
> URL: https://issues.apache.org/jira/browse/HIVE-4235
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC, Query Processor, SQL
>Reporter: Gang Tim Liu
>Assignee: Gang Tim Liu
> Fix For: 0.11.0
>
> Attachments: HIVE-4235.patch.1
>
>
> CREATE TABLE IF NOT EXISTS uses an inefficient way to check if the table exists.
> It uses Hive.java's getTablesByPattern(...) to check if the table exists. That 
> involves a regular expression and eventually a database join, which is very 
> inefficient. It can increase database lock time and hurt db performance if a 
> lot of such commands hit the database.
> The suggested approach is to use getTable(...) since we know the table name already.
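
A toy Java sketch of the difference (the "metastore" here is just a HashMap, and the helper names existsByPattern/existsByName are made up): a pattern lookup has to scan and regex-match entries, while an exact-name lookup is a single keyed access, which is the direction the description suggests with getTable(...).

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Pattern;

public class TableLookupSketch {
  // Stand-in for the metastore's table catalog.
  private static final Map<String, Object> TABLES = new HashMap<>();

  // Pattern-style lookup: scans every entry and applies a regex,
  // which is what the description says getTablesByPattern(...) boils down to.
  static boolean existsByPattern(String pattern) {
    Pattern p = Pattern.compile(pattern);
    for (String name : TABLES.keySet()) {
      if (p.matcher(name).matches()) {
        return true;
      }
    }
    return false;
  }

  // Direct lookup: we already know the exact table name.
  static boolean existsByName(String name) {
    return TABLES.containsKey(name);
  }

  public static void main(String[] args) {
    TABLES.put("t1", new Object());
    System.out.println(existsByPattern("t1")); // true, but via a scan plus regex
    System.out.println(existsByName("t1"));    // true, via a single keyed lookup
  }
}
{code}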

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4079) Altering a view partition fails with NPE

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623026#comment-13623026
 ] 

Hudson commented on HIVE-4079:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4079 Altering a view partition fails with NPE
(Kevin Wilfong via namit) (Revision 1451173)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451173
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java
* 
/hive/trunk/metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java


> Altering a view partition fails with NPE
> 
>
> Key: HIVE-4079
> URL: https://issues.apache.org/jira/browse/HIVE-4079
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4079.1.patch.txt
>
>
> Altering a view partition, e.g. to add partition parameters, fails with a null 
> pointer exception in the ObjectStore class.
> Currently, this is only possible using the metastore Thrift API and there are 
> no testcases for it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4077) alterPartition and alterPartitions methods in ObjectStore swallow exceptions

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623025#comment-13623025
 ] 

Hudson commented on HIVE-4077:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4077 alterPartition and alterPartitions methods in ObjectStore swallow 
exceptions
(Kevin Wilfong via namit) (Revision 1451476)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451476
Files : 
* 
/hive/trunk/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java


> alterPartition and alterPartitions methods in ObjectStore swallow exceptions
> 
>
> Key: HIVE-4077
> URL: https://issues.apache.org/jira/browse/HIVE-4077
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4077.1.patch.txt, HIVE-4077.2.patch.txt, 
> HIVE-4077.3.patch.txt
>
>
> The alterPartition and alterPartitions methods in the ObjectStore class throw 
> a MetaException in the case of a failure but do not include the cause, 
> meaning that information is lost.
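
A small, self-contained Java sketch of the general problem and the usual fix (MetaExceptionLike below is a stand-in class, not Hive's MetaException, and this is not necessarily the committed patch): wrapping without attaching the cause loses the underlying stack trace, while initCause keeps it.

{code}
public class CauseDemo {
  static class MetaExceptionLike extends Exception {
    MetaExceptionLike(String msg) { super(msg); }
  }

  static void alterPartition() throws MetaExceptionLike {
    try {
      throw new IllegalStateException("underlying datastore failure");
    } catch (RuntimeException e) {
      // Swallowing: new MetaExceptionLike("alterPartition failed") would lose the cause.
      // Keeping it: attach the original exception so the stack trace shows why it failed.
      MetaExceptionLike me = new MetaExceptionLike("alterPartition failed: " + e.getMessage());
      me.initCause(e);
      throw me;
    }
  }

  public static void main(String[] args) {
    try {
      alterPartition();
    } catch (MetaExceptionLike e) {
      e.printStackTrace(); // includes "Caused by: java.lang.IllegalStateException: ..."
    }
  }
}
{code}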

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3904) Replace hashmaps in JoinOperators to array

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623024#comment-13623024
 ] 

Hudson commented on HIVE-3904:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3904 Replace hashmaps in JoinOperators to array
(Navis via namit) (Revision 1451260)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451260
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/AbstractMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/CommonJoinOperator.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/JoinUtil.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SMBMapJoinOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/SkewJoinHandler.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/JoinDesc.java


> Replace hashmaps in JoinOperators to array
> --
>
> Key: HIVE-3904
> URL: https://issues.apache.org/jira/browse/HIVE-3904
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.11.0
>
> Attachments: HIVE-3904.D7959.1.patch, HIVE-3904.D7959.2.patch
>
>
> The Join operator has many HashMaps that map a tag to some internal 
> value (ExprEvals, OIs, etc.), and these are accessed 5 or more times per 
> object, which seems like unnecessary overhead.
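
A hedged illustration of the optimization described (the field names below are made up): when tags are small, dense integers, a plain array indexed by tag replaces repeated HashMap lookups on the per-row hot path.

{code}
import java.util.HashMap;
import java.util.Map;

public class TagLookupSketch {
  public static void main(String[] args) {
    int numAliases = 2;

    // Before: a map keyed by tag, consulted several times per processed row.
    Map<Byte, String> exprByTagMap = new HashMap<>();
    exprByTagMap.put((byte) 0, "exprs for alias 0");
    exprByTagMap.put((byte) 1, "exprs for alias 1");

    // After: tag values are 0..numAliases-1, so an array lookup is enough.
    String[] exprByTag = new String[numAliases];
    exprByTag[0] = "exprs for alias 0";
    exprByTag[1] = "exprs for alias 1";

    byte tag = 1;
    System.out.println(exprByTagMap.get(tag)); // hash lookup + boxing per access
    System.out.println(exprByTag[tag]);        // plain indexed access
  }
}
{code}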

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3850) hour() function returns 12 hour clock value when using timestamp datatype

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3850?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623023#comment-13623023
 ] 

Hudson commented on HIVE-3850:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3850 : hour() function returns 12 hour clock value when using 
timestamp datatype (Anandha and Franklin via Ashutosh Chauhan) (Revision 
1462988)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462988
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFHour.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_hour.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_hour.q.out


> hour() function returns 12 hour clock value when using timestamp datatype
> -
>
> Key: HIVE-3850
> URL: https://issues.apache.org/jira/browse/HIVE-3850
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.9.0, 0.10.0
>Reporter: Pieterjan Vriends
> Fix For: 0.11.0
>
> Attachments: hive-3850_1.patch, HIVE-3850.patch.txt
>
>
> Apparently UDFHour.java has two evaluate() functions: one that accepts a Text 
> object as parameter and one that takes a TimeStampWritable object as parameter. 
> The first function returns the value of Calendar.HOUR_OF_DAY and the second one 
> Calendar.HOUR. In the documentation I couldn't find any information on the 
> overload of the evaluate function. I spent quite some time finding out why my 
> statement didn't return a 24 hour clock value.
> Shouldn't both functions return the same?
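
A self-contained Java demo of the distinction behind this bug (the timestamp is just an example): Calendar.HOUR is the 12-hour clock value, while Calendar.HOUR_OF_DAY is the 24-hour value that hour() is expected to return.

{code}
import java.sql.Timestamp;
import java.util.Calendar;

public class HourDemo {
  public static void main(String[] args) {
    Calendar cal = Calendar.getInstance();
    cal.setTime(Timestamp.valueOf("2013-01-01 17:30:00"));
    System.out.println(cal.get(Calendar.HOUR));        // 5  (12-hour clock)
    System.out.println(cal.get(Calendar.HOUR_OF_DAY)); // 17 (24-hour clock)
  }
}
{code}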

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4131) Fix eclipse template classpath to include new packages added by ORC file patch

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623022#comment-13623022
 ] 

Hudson commented on HIVE-4131:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4131. Fix eclipse template classpath to include new packages added by 
ORC file patch. (Prasad Mujumdar via kevinwilfong) (Revision 1454496)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1454496
Files : 
* /hive/trunk/eclipse-templates/.classpath


> Fix eclipse template classpath to include new packages added by ORC file patch
> --
>
> Key: HIVE-4131
> URL: https://issues.apache.org/jira/browse/HIVE-4131
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Fix For: 0.11.0
>
> Attachments: HIVE-4131-1.patch
>
>
> The ORC file feature (HIVE-3874) has added protobuf and snappy libraries, as 
> well as generated protobuf code. All of these need to be included in the 
> eclipse classpath template. The eclipse project generated on latest trunk has 
> build errors due to the missing jars/classes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3862) testHBaseNegativeCliDriver_cascade_dbdrop fails on hadoop-1

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623021#comment-13623021
 ] 

Hudson commented on HIVE-3862:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3862 : testHBaseNegativeCliDriver_cascade_dbdrop fails on hadoop-1 
(Gunther Hagleitner via Ashutosh Chauhan) (Revision 1455405)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1455405
Files : 
* /hive/trunk/hbase-handler/src/test/queries/negative/cascade_dbdrop.q
* /hive/trunk/hbase-handler/src/test/queries/negative/cascade_dbdrop_hadoop20.q
* /hive/trunk/hbase-handler/src/test/results/negative/cascade_dbdrop.q.out
* 
/hive/trunk/hbase-handler/src/test/results/negative/cascade_dbdrop_hadoop20.q.out


> testHBaseNegativeCliDriver_cascade_dbdrop fails on hadoop-1
> ---
>
> Key: HIVE-3862
> URL: https://issues.apache.org/jira/browse/HIVE-3862
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.10.0
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-3862.1.patch, HIVE-3862.patch
>
>
> The functionality is actually working correctly, but an incorrect 
> include/exclude macro may cause the wrong query file to be run.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4075) TypeInfoFactory is not thread safe and is access by multiple threads

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623020#comment-13623020
 ] 

Hudson commented on HIVE-4075:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4075 : TypeInfoFactory is not thread safe and is access by multiple 
threads (Brock Noland via Ashutosh Chauhan) (Revision 1451280)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451280
Files : 
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/TypeInfoFactory.java


> TypeInfoFactory is not thread safe and is access by multiple threads
> 
>
> Key: HIVE-4075
> URL: https://issues.apache.org/jira/browse/HIVE-4075
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 0.10.0
>Reporter: Brock Noland
>Assignee: Brock Noland
> Fix For: 0.11.0
>
> Attachments: HIVE-4075-0.patch
>
>
> TypeInfoFactory is not thread safe; when it is accessed by multiple threads, 
> calls to any of its methods can modify the hashmaps concurrently, resulting in 
> infinite loops.
> {noformat}
> "pool-1-thread-240" prio=10 tid=0x2aabd8bf7000 nid=0x5f4a runnable 
> [0x44626000] 
> java.lang.Thread.State: RUNNABLE 
> at java.util.HashMap.get(HashMap.java:303) 
> at 
> org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory.getStructTypeInfo(TypeInfoFactory.java:94)
>  
> at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initSerdeParams(LazySimpleSerDe.java:237)
>  
> at 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe.initialize(LazySimpleSerDe.java:182)
>  
> at 
> org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:203)
>  
> at 
> org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:260)
>  
> at org.apache.hadoop.hive.ql.metadata.Table.checkValidity(Table.java:167) 
> at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:930) 
> at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeDropTable(DDLSemanticAnalyzer.java:706)
>  
> at 
> org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:212)
>  
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:246)
>  
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:432) 
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:337) 
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:906) 
> - locked <0x2aaac6e1c270> (a java.lang.Object) 
> at 
> org.apache.hive.service.cli.operation.SQLOperation.run(SQLOperation.java:94)
> {noformat}
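
A hedged sketch of one way to make a memoizing factory safe for concurrent callers (the actual Hive fix may differ): a plain HashMap mutated from multiple threads can corrupt its internal table and spin forever, as in the stack trace above, whereas ConcurrentHashMap with putIfAbsent stays consistent.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class TypeInfoCacheSketch {
  // Stand-in cache; in TypeInfoFactory the values would be TypeInfo instances.
  private static final ConcurrentMap<String, Object> CACHE = new ConcurrentHashMap<>();

  static Object getOrCreate(String typeName) {
    Object info = CACHE.get(typeName);
    if (info == null) {
      Object candidate = new Object();                // build the TypeInfo here
      Object prev = CACHE.putIfAbsent(typeName, candidate);
      info = (prev != null) ? prev : candidate;       // first writer wins, others reuse it
    }
    return info;
  }

  public static void main(String[] args) {
    System.out.println(getOrCreate("struct<a:int>") == getOrCreate("struct<a:int>")); // true
  }
}
{code}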

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4139) MiniDFS shim does not work for hadoop 2

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623017#comment-13623017
 ] 

Hudson commented on HIVE-4139:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4139 : MiniDFS shim does not work for hadoop 2 (Gunther Hagleitner via 
Ashutosh Chauhan) (Revision 1459072)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1459072
Files : 
* /hive/trunk/build-common.xml
* /hive/trunk/build.properties
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/shims/ivy.xml
* 
/hive/trunk/shims/src/0.20/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java
* 
/hive/trunk/shims/src/0.20S/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java
* 
/hive/trunk/shims/src/0.23/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java
* 
/hive/trunk/shims/src/common-secure/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
* 
/hive/trunk/shims/src/common/java/org/apache/hadoop/hive/shims/HadoopShims.java


> MiniDFS shim does not work for hadoop 2
> ---
>
> Key: HIVE-4139
> URL: https://issues.apache.org/jira/browse/HIVE-4139
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-4139.1.patch, HIVE-4139.2.patch, HIVE-4139.3.patch, 
> HIVE-4139.4.patch
>
>
> There's an incompatibility between hadoop 1 & 2 with respect to the 
> MiniDfsCluster class. That causes the hadoop 2 line Minimr tests to fail with a 
> "MethodNotFound" exception.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4138) ORC's union object inspector returns a type name that isn't parseable by TypeInfoUtils

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623019#comment-13623019
 ] 

Hudson commented on HIVE-4138:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4138 : ORC's union object inspector returns a type name that isn't 
parseable by TypeInfoUtils (Owen Omalley via Ashutosh Chauhan) (Revision 
1464227)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464227
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcUnion.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcStruct.java


> ORC's union object inspector returns a type name that isn't parseable by 
> TypeInfoUtils
> --
>
> Key: HIVE-4138
> URL: https://issues.apache.org/jira/browse/HIVE-4138
> Project: Hive
>  Issue Type: Bug
>  Components: Serializers/Deserializers
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 0.11.0
>
> Attachments: h-4138.patch, HIVE-4138.D9219.1.patch, 
> HIVE-4138.D9219.2.patch
>
>
> Currently the typename returned by ORC's union object inspector isn't 
> parseable by TypeInfoUtils. The format needs to be union.
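
A small Java sketch of emitting a type name in Hive's standard type-string syntax, which is what TypeInfoUtils parses; this assumes the union syntax is uniontype<member1,member2,...> and the helper below is illustrative, not the ORC code.

{code}
import java.util.Arrays;
import java.util.List;

public class UnionTypeNameSketch {
  static String unionTypeName(List<String> memberTypeNames) {
    StringBuilder sb = new StringBuilder("uniontype<");
    for (int i = 0; i < memberTypeNames.size(); i++) {
      if (i > 0) {
        sb.append(",");
      }
      sb.append(memberTypeNames.get(i));
    }
    return sb.append(">").toString();
  }

  public static void main(String[] args) {
    // prints: uniontype<int,string>
    System.out.println(unionTypeName(Arrays.asList("int", "string")));
  }
}
{code}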

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4179) NonBlockingOpDeDup does not merge SEL operators correctly

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623016#comment-13623016
 ] 

Hudson commented on HIVE-4179:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4179 : NonBlockingOpDeDup does not merge SEL operators correctly 
(Gunther Hagleitner via Ashutosh Chauhan) (Revision 1464042)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464042
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/unionproc/UnionProcFactory.java
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_22.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_23.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_7.q.out


> NonBlockingOpDeDup does not merge SEL operators correctly
> -
>
> Key: HIVE-4179
> URL: https://issues.apache.org/jira/browse/HIVE-4179
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
>Priority: Critical
> Fix For: 0.11.0
>
> Attachments: HIVE-4179.1.patch, HIVE-4179.2.patch, HIVE-4179.3.patch, 
> HIVE-4179.4.patch
>
>
> The input column list for SEL operators isn't merged properly in the 
> optimization. The best way to see this is to run union_remove_22.q with 
> -Dhadoop.mr.rev=23. The plan shows lost UDFs and a broken lineage for one 
> column.
> Note: union_remove tests do not run on hadoop 1 or 0.20.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4072) Hive eclipse build path update for string template jar

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623018#comment-13623018
 ] 

Hudson commented on HIVE-4072:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4072 Hive eclipse build path update for string template jar
(Vikram Dixit K via namit) (Revision 1451162)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451162
Files : 
* /hive/trunk/ivy/libraries.properties


> Hive eclipse build path update for string template jar
> --
>
> Key: HIVE-4072
> URL: https://issues.apache.org/jira/browse/HIVE-4072
> Project: Hive
>  Issue Type: Bug
>  Components: Build Infrastructure
>Affects Versions: 0.11.0
>Reporter: Vikram Dixit K
>Assignee: Vikram Dixit K
>Priority: Minor
> Fix For: 0.11.0
>
> Attachments: HIVE-4072.patch
>
>
> StringTemplate jar version needs to be updated for hive to work with eclipse 
> without user intervention.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4272) partition wise metadata does not work for text files

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623015#comment-13623015
 ] 

Hudson commented on HIVE-4272:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4272 partition wise metadata does not work for text files (Revision 
1463594)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463594
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/test/queries/clientpositive/partition_wise_fileformat15.q
* /hive/trunk/ql/src/test/queries/clientpositive/partition_wise_fileformat16.q
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat15.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/partition_wise_fileformat16.q.out
* 
/hive/trunk/serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/ObjectInspectorConverters.java


> partition wise metadata does not work for text files
> 
>
> Key: HIVE-4272
> URL: https://issues.apache.org/jira/browse/HIVE-4272
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Namit Jain
>Assignee: Namit Jain
> Fix For: 0.11.0
>
> Attachments: hive.4272.1.patch, hive.4272.2.patch, 
> hive.4272.2.patch-nohcat
>
>
> The following test fails:
> set hive.input.format = org.apache.hadoop.hive.ql.io.CombineHiveInputFormat;
> -- This tests that the schema can be changed for binary serde data
> create table partition_test_partitioned(key string, value string)
> partitioned by (dt string) stored as textfile;
> insert overwrite table partition_test_partitioned partition(dt='1')
> select * from src where key = 238;
> select * from partition_test_partitioned where dt is not null;
> select key+key, value from partition_test_partitioned where dt is not null;
> alter table partition_test_partitioned change key key int;
> select key+key, value from partition_test_partitioned where dt is not null;
> select * from partition_test_partitioned where dt is not null;
> It works fine for a RCFile

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3775) Unit test failures due to unspecified order of results in "show grant" command

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623014#comment-13623014
 ] 

Hudson commented on HIVE-3775:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3775 : Unit test failures due to unspecified order of results in show 
grant command (Gunther Hagleitner via Ashutosh Chauhan) (Revision 1451437)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1451437
Files : 
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/queries/clientnegative/authorization_fail_3.q
* /hive/trunk/ql/src/test/queries/clientnegative/authorization_fail_4.q
* /hive/trunk/ql/src/test/queries/clientnegative/authorization_fail_5.q
* /hive/trunk/ql/src/test/queries/clientnegative/authorization_fail_6.q
* /hive/trunk/ql/src/test/queries/clientnegative/authorization_fail_7.q
* /hive/trunk/ql/src/test/queries/clientnegative/authorization_part.q
* 
/hive/trunk/ql/src/test/queries/clientpositive/alter_rename_partition_authorization.q
* /hive/trunk/ql/src/test/queries/clientpositive/authorization_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/authorization_2.q
* /hive/trunk/ql/src/test/queries/clientpositive/authorization_3.q
* /hive/trunk/ql/src/test/queries/clientpositive/authorization_4.q
* /hive/trunk/ql/src/test/queries/clientpositive/authorization_5.q
* /hive/trunk/ql/src/test/queries/clientpositive/authorization_6.q
* /hive/trunk/ql/src/test/queries/clientpositive/keyword_1.q
* /hive/trunk/ql/src/test/results/clientnegative/authorization_fail_3.q.out
* /hive/trunk/ql/src/test/results/clientnegative/authorization_fail_4.q.out
* /hive/trunk/ql/src/test/results/clientnegative/authorization_fail_5.q.out
* /hive/trunk/ql/src/test/results/clientnegative/authorization_fail_6.q.out
* /hive/trunk/ql/src/test/results/clientnegative/authorization_fail_7.q.out
* /hive/trunk/ql/src/test/results/clientnegative/authorization_part.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/alter_rename_partition_authorization.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/authorization_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/keyword_1.q.out


> Unit test failures due to unspecified order of results in "show grant" command
> --
>
> Key: HIVE-3775
> URL: https://issues.apache.org/jira/browse/HIVE-3775
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: 0.11.0
>
> Attachments: HIVE-3775.1-r1417768.patch, HIVE-3775.2.patch
>
>
> A number of unit tests using "show grant" (sometimes) fail when run on 
> windows or when previous failures have put the database in an unexpected state.
> The reason is that the output of "show grant" is not specified to be in any 
> particular order, but the golden files expect it to be.
> The unit test framework should be extended to handle cases like that.
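
A hedged sketch of the kind of handling described (the real change lives in QTestUtil and may differ): when a command's output order is unspecified, sort both the actual and the expected lines before diffing so that ordering differences alone don't fail the test.

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class SortBeforeDiffSketch {
  static List<String> normalized(List<String> lines) {
    List<String> copy = new ArrayList<>(lines);
    Collections.sort(copy);
    return copy;
  }

  public static void main(String[] args) {
    List<String> golden = Arrays.asList("userA SELECT", "userB ALL");
    List<String> actual = Arrays.asList("userB ALL", "userA SELECT"); // same grants, other order
    System.out.println(normalized(golden).equals(normalized(actual))); // true
  }
}
{code}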

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4270) bug in hive.map.groupby.sorted in the presence of multiple input partitions

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623013#comment-13623013
 ] 

Hudson commented on HIVE-4270:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4270 bug in hive.map.groupby.sorted in the presence of multiple input 
partitions
(Namit via Gang Tim Liu) (Revision 1463373)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463373
Files : 
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/GroupByOptimizer.java
* /hive/trunk/ql/src/test/queries/clientpositive/groupby_sort_9.q
* /hive/trunk/ql/src/test/results/clientpositive/groupby_sort_9.q.out


> bug in hive.map.groupby.sorted in the presence of multiple input partitions
> ---
>
> Key: HIVE-4270
> URL: https://issues.apache.org/jira/browse/HIVE-4270
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.11.0
>Reporter: Namit Jain
>Assignee: Namit Jain
> Fix For: 0.11.0
>
> Attachments: hive.4270.1.patch
>
>
> This can lead to wrong results.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4176) disable TestBeeLineDriver in ptest util

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623012#comment-13623012
 ] 

Hudson commented on HIVE-4176:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4176. disable TestBeeLineDriver in ptest util. (kevinwilfong reviewed 
by njain, ashutoshc) (Revision 1456742)

 Result = FAILURE
kevinwilfong : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1456742
Files : 
* /hive/trunk/testutils/ptest/hivetest.py


> disable TestBeeLineDriver in ptest util
> ---
>
> Key: HIVE-4176
> URL: https://issues.apache.org/jira/browse/HIVE-4176
> Project: Hive
>  Issue Type: Sub-task
>  Components: Testing Infrastructure
>Affects Versions: 0.11.0
>Reporter: Kevin Wilfong
>Assignee: Kevin Wilfong
> Fix For: 0.11.0
>
> Attachments: HIVE-4176.1.patch.txt, HIVE-4176.2.patch.txt
>
>
> The test is disabled for ant test, so it should be disabled for ptest as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4174) Round UDF converts BigInts to double

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623010#comment-13623010
 ] 

Hudson commented on HIVE-4174:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4174 Round UDF converts BigInts to double
(Chen Chun via namit) (Revision 1463880)

 Result = FAILURE
namit : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1463880
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFRound.java
* /hive/trunk/ql/src/test/queries/clientpositive/udf_round_3.q
* /hive/trunk/ql/src/test/results/clientpositive/udf_round.q.out
* /hive/trunk/ql/src/test/results/clientpositive/udf_round_3.q.out


> Round UDF converts BigInts to double
> 
>
> Key: HIVE-4174
> URL: https://issues.apache.org/jira/browse/HIVE-4174
> Project: Hive
>  Issue Type: Bug
>  Components: UDF
>Affects Versions: 0.10.0
>Reporter: Mark Grover
>Assignee: Chen Chun
> Fix For: 0.11.0
>
> Attachments: hive.4174.1.patch-nohcat, HIVE-4174.1.patch.txt, 
> HIVE-4174.D9687.1.patch
>
>
> Chen Chun pointed out on the hive-user mailing list that round() in Hive 0.10 
> returns
> {code}
> select round(cast(1234560 as BIGINT)), round(cast(12345670 as BIGINT)) from 
> test limit 1;
> //hive 0.10
> 1234560.0  1.234567E7
> {code}
> This is not consistent with 
> MySQL(http://dev.mysql.com/doc/refman/5.1/en/mathematical-functions.html#function_round)
> which quotes
> {code}
> The return type is the same type as that of the first argument (assuming that 
> it is integer, double, or decimal). This means that for an integer argument, 
> the result is an integer (no decimal places)
> {code}
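
A self-contained Java illustration of the two behaviors (the overload names are just for the demo, not Hive's UDFRound API): keeping the integral type preserves 12345670, while widening to double both changes the type and prints in scientific notation, which is the 1.234567E7 shown above.

{code}
public class RoundDemo {
  // Integral argument -> integral result, matching the quoted MySQL behavior.
  static long round(long value) {
    return value;
  }

  // Floating-point argument -> floating-point result.
  static double round(double value) {
    return (double) Math.round(value);
  }

  public static void main(String[] args) {
    System.out.println(round(12345670L));   // 12345670
    System.out.println((double) 12345670L); // 1.234567E7  <- the Hive 0.10 output above
  }
}
{code}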

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-3464) Merging join tree may reorder joins which could be invalid

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-3464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623009#comment-13623009
 ] 

Hudson commented on HIVE-3464:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-3464 : Merging join tree may reorder joins which could be invalid 
(Navis via Ashutosh Chauhan) (Revision 1464230)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1464230
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
* /hive/trunk/ql/src/test/queries/clientpositive/mergejoins_mixed.q
* /hive/trunk/ql/src/test/queries/clientpositive/smb_mapjoin_17.q
* /hive/trunk/ql/src/test/results/clientpositive/join_filters_overlap.q.out
* /hive/trunk/ql/src/test/results/clientpositive/mergejoins_mixed.q.out
* /hive/trunk/ql/src/test/results/clientpositive/smb_mapjoin_17.q.out


> Merging join tree may reorder joins which could be invalid
> --
>
> Key: HIVE-3464
> URL: https://issues.apache.org/jira/browse/HIVE-3464
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.10.0
>Reporter: Navis
>Assignee: Navis
> Fix For: 0.11.0
>
> Attachments: HIVE-3464.D5409.2.patch, HIVE-3464.D5409.3.patch, 
> HIVE-3464.D5409.4.patch, HIVE-3464.D5409.5.patch, HIVE-3464.D5409.6.patch
>
>
> Currently, hive merges join tree from right to left regardless of join types, 
> which may introduce join reordering. For example,
> select * from a join a b on a.key=b.key join a c on b.key=c.key join a d on 
> a.key=d.key; 
> Hive tries to merge join tree in a-d=b-d, a-d=a-b, b-c=a-b order and a-d=a-b 
> and b-c=a-b will be merged. Final join tree is "a-(bdc)".
> With this, ab-d join will be executed prior to ab-c. But if join type of -c 
> and -d is different, this is not valid.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HIVE-4170) [REGRESSION] FsShell.close closes filesystem, removing temporary directories

2013-04-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13623008#comment-13623008
 ] 

Hudson commented on HIVE-4170:
--

Integrated in Hive-trunk-hadoop2 #138 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/138/])
HIVE-4170 : [REGRESSION] FsShell.close closes filesystem, removing 
temporary directories (Navis via Ashutosh Chauhan) (Revision 1462872)

 Result = FAILURE
hashutosh : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1462872
Files : 
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java


> [REGRESSION] FsShell.close closes filesystem, removing temporary directories
> 
>
> Key: HIVE-4170
> URL: https://issues.apache.org/jira/browse/HIVE-4170
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Reporter: Navis
>Assignee: Navis
> Fix For: 0.11.0
>
> Attachments: HIVE-4170.D9393.1.patch
>
>
> truncate (HIVE-446) closes the FileSystem, causing various problems (deleting 
> the temporary directory for the running hive query, etc.).
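
A hedged Java sketch of the failure mode (the path is an example and this is not the DDLTask fix): FileSystem.get(conf) hands back a process-wide cached instance, so closing it from one code path, e.g. via a helper such as FsShell, also affects everything else still relying on that instance, including cleanup of the query's scratch directories.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SharedFsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    FileSystem fs1 = FileSystem.get(conf);
    FileSystem fs2 = FileSystem.get(conf);
    System.out.println(fs1 == fs2); // true: FileSystem.get() returns a cached, shared instance

    fs1.close();                    // closing it here affects every other holder of fs2:
                                    // on HDFS later calls can fail with "Filesystem closed",
                                    // and pending deleteOnExit cleanup (e.g. scratch dirs) runs early
    fs2.exists(new Path("/tmp/hive-scratch"));
  }
}
{code}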

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

