[jira] [Created] (HIVE-6764) when hive.security.authorization.enabled=true is set, Hive starts up with errors.

2014-03-27 Thread haitangfan (JIRA)
haitangfan created HIVE-6764:


 Summary: when hive.security.authorization.enabled=true is set, Hive 
starts up with errors.
 Key: HIVE-6764
 URL: https://issues.apache.org/jira/browse/HIVE-6764
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: haitangfan






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6764) when hive.security.authorization.enabled=true is set, Hive starts up with errors.

2014-03-27 Thread haitangfan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haitangfan updated HIVE-6764:
-

Description: 
1. set hive.security.authorization.enabled=true;
2. start up Hive and find these errors in the log:
notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh 
prepare]/returns: FAILED: AuthorizationException No privilege 'Create' found for 
inputs { database:default,
table:hcatsmokeida8c00f0b_date432614}
err: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh 
prepare]/returns: change from notrun to 0
failed: su - ambari-qa -c 'sh /tmp/hcatSmoke.sh hcatsmokeida8c00f0b_date432614 
prepare' returned 64 instead of
one of [0] at 
/var/lib/ambari-agent/puppet/modules/hdp-hcat/manifests/hcat/service_check.pp:54
notice: 
/Stage[2]/Hdp-hive::Hive::Service_check/Exec[/tmp/hiveserver2Smoke.sh]/returns: 
Smoke test of
hiveserver2 passed
notice: 
/Stage[2]/Hdp-hive::Hive::Service_check/Exec[/tmp/hiveserver2Smoke.sh]/returns: 
executed successfully

3. The /tmp/hcatSmoke.sh contents:
export tablename=$1
case $2 in
prepare)
  hcat -e "show tables"
  hcat -e "drop table IF EXISTS ${tablename}"
  hcat -e "create table ${tablename} ( id INT, name string ) stored as rcfile ;"
;;
cleanup)
  hcat -e "drop table IF EXISTS ${tablename}"
;;
esac

4. Trying to grant to user ambari-qa failed:
hive> grant all on database default to user ambari-qa;
FAILED: ParseException line 1:44 missing EOF at '-' near 'ambari'

How to fix it?

 when hive.security.authorization.enabled=true is set, Hive starts up with errors.
 

 Key: HIVE-6764
 URL: https://issues.apache.org/jira/browse/HIVE-6764
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: haitangfan

 1. set hive.security.authorization.enabled=true;
 2. start up Hive and find these errors in the log:
 notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh 
 prepare]/returns: FAILED: AuthorizationException No privilege 'Create' found 
 for inputs { database:default,
 table:hcatsmokeida8c00f0b_date432614}
 err: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh 
 prepare]/returns: change from notrun to 0
 failed: su - ambari-qa -c 'sh /tmp/hcatSmoke.sh 
 hcatsmokeida8c00f0b_date432614 prepare' returned 64 instead of
 one of [0] at 
 /var/lib/ambari-agent/puppet/modules/hdp-hcat/manifests/hcat/service_check.pp:54
 notice: 
 /Stage[2]/Hdp-hive::Hive::Service_check/Exec[/tmp/hiveserver2Smoke.sh]/returns:
  Smoke test of
 hiveserver2 passed
 notice: 
 /Stage[2]/Hdp-hive::Hive::Service_check/Exec[/tmp/hiveserver2Smoke.sh]/returns:
  executed successfully
 3. The /tmp/hcatSmoke.sh contents:
 export tablename=$1
 case $2 in
 prepare)
   hcat -e "show tables"
   hcat -e "drop table IF EXISTS ${tablename}"
   hcat -e "create table ${tablename} ( id INT, name string ) stored as rcfile ;"
 ;;
 cleanup)
   hcat -e "drop table IF EXISTS ${tablename}"
 ;;
 esac
 4. Trying to grant to user ambari-qa failed:
 hive> grant all on database default to user ambari-qa;
 FAILED: ParseException line 1:44 missing EOF at '-' near 'ambari'
 How to fix it?





[jira] [Commented] (HIVE-6492) limit partition number involved in a table scan

2014-03-27 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948934#comment-13948934
 ] 

Lefty Leverenz commented on HIVE-6492:
--

This adds *hive.limit.query.max.table.partition* to HiveConf.java but it needs 
a description.  There's plenty of description in the comments, but a release 
note would be helpful.  Then I could put it in the wiki, and make sure the 
description goes into the new HiveConf.java (via HIVE-6586) after HIVE-6037 
gets committed.
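To illustrate the kind of description needed, a hive-default.xml.template entry for the new variable might look like the following (a sketch only; the value 1000 is an arbitrary example, not a recommended setting):

```xml
<property>
  <name>hive.limit.query.max.table.partition</name>
  <value>1000</value>
  <description>Fail queries whose table scans would involve more than this
  many partitions of a single table; -1 (the default) means no limit. Does
  not affect metadata-only queries.</description>
</property>
```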

 limit partition number involved in a table scan
 ---

 Key: HIVE-6492
 URL: https://issues.apache.org/jira/browse/HIVE-6492
 Project: Hive
  Issue Type: New Feature
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Selina Zhang
Assignee: Selina Zhang
 Fix For: 0.13.0

 Attachments: HIVE-6492.1.patch.txt, HIVE-6492.2.patch.txt, 
 HIVE-6492.3.patch.txt, HIVE-6492.4.patch.txt, HIVE-6492.4.patch_suggestion, 
 HIVE-6492.5.patch.txt, HIVE-6492.6.patch.txt, HIVE-6492.7.parch.txt

   Original Estimate: 24h
  Remaining Estimate: 24h

 To protect the cluster, a new configuration variable 
 hive.limit.query.max.table.partition is added to the Hive configuration to 
 limit the number of table partitions involved in a table scan. 
 The default value is -1, which means there is no limit. 
 This variable does not affect metadata-only queries.





[jira] [Updated] (HIVE-6697) HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6697:
---

Status: Open  (was: Patch Available)

 HiveServer2 secure thrift/http authentication needs to support SPNego 
 --

 Key: HIVE-6697
 URL: https://issues.apache.org/jira/browse/HIVE-6697
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6697.1.patch, HIVE-6697.2.patch, HIVE-6697.3.patch, 
 HIVE-6697.4.patch, hive-6697-req-impl-verify.md


 Looking to integrate Apache Knox with HiveServer2 secure 
 thrift/http.
 Found that thrift/http uses some form of Kerberos authentication that is not 
 SPNego. Since it goes over the HTTP protocol, we expected it to use 
 SPNego.
 Apache Knox is already integrated with WebHDFS, WebHCat, Oozie and HBase 
 Stargate using SPNego for authentication.
 Requesting that HiveServer2 secure thrift/http authentication support SPNego.





[jira] [Updated] (HIVE-6697) HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6697:
---

Status: Patch Available  (was: Open)

 HiveServer2 secure thrift/http authentication needs to support SPNego 
 --

 Key: HIVE-6697
 URL: https://issues.apache.org/jira/browse/HIVE-6697
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6697.1.patch, HIVE-6697.2.patch, HIVE-6697.3.patch, 
 HIVE-6697.4.patch, hive-6697-req-impl-verify.md


 Looking to integrate Apache Knox with HiveServer2 secure 
 thrift/http.
 Found that thrift/http uses some form of Kerberos authentication that is not 
 SPNego. Since it goes over the HTTP protocol, we expected it to use 
 SPNego.
 Apache Knox is already integrated with WebHDFS, WebHCat, Oozie and HBase 
 Stargate using SPNego for authentication.
 Requesting that HiveServer2 secure thrift/http authentication support SPNego.





[jira] [Updated] (HIVE-6697) HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6697:
---

Attachment: HIVE-6697.4.patch

[~darumugam] The v3 patch failed to apply on trunk. I'm attaching your patch 
rebased on trunk.

 HiveServer2 secure thrift/http authentication needs to support SPNego 
 --

 Key: HIVE-6697
 URL: https://issues.apache.org/jira/browse/HIVE-6697
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6697.1.patch, HIVE-6697.2.patch, HIVE-6697.3.patch, 
 HIVE-6697.4.patch, hive-6697-req-impl-verify.md


 Looking to integrate Apache Knox with HiveServer2 secure 
 thrift/http.
 Found that thrift/http uses some form of Kerberos authentication that is not 
 SPNego. Since it goes over the HTTP protocol, we expected it to use 
 SPNego.
 Apache Knox is already integrated with WebHDFS, WebHCat, Oozie and HBase 
 Stargate using SPNego for authentication.
 Requesting that HiveServer2 secure thrift/http authentication support SPNego.





[jira] [Updated] (HIVE-6638) Hive needs to implement recovery for Application Master restart

2014-03-27 Thread Mohammad Kamrul Islam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mohammad Kamrul Islam updated HIVE-6638:


Status: Patch Available  (was: Open)

 Hive needs to implement recovery for Application Master restart 
 

 Key: HIVE-6638
 URL: https://issues.apache.org/jira/browse/HIVE-6638
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Affects Versions: 0.12.0, 0.11.0, 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Mohammad Kamrul Islam
 Attachments: HIVE-6638.1.patch


 Currently, if the AM restarts, the whole job is restarted. Although the job, 
 and subsequently the query, would still run to completion, it would be nice if 
 Hive didn't need to redo all the work done under the previous AM.





[jira] [Commented] (HIVE-6743) Allow specifying the log level for Tez tasks

2014-03-27 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948945#comment-13948945
 ] 

Lefty Leverenz commented on HIVE-6743:
--

For the record:  this adds *hive.tez.log.level* to HiveConf.java and 
hive-default.xml.template.

After HIVE-6037 gets committed, the description in hive-default.xml.template 
can be merged into the new HiveConf.java (via HIVE-6586).

 Allow specifying the log level for Tez tasks
 

 Key: HIVE-6743
 URL: https://issues.apache.org/jira/browse/HIVE-6743
 Project: Hive
  Issue Type: Improvement
Reporter: Siddharth Seth
Assignee: Siddharth Seth
 Attachments: HIVE-6743.1.patch, HIVE-6743.2.patch








Re: Review Request 19599: HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19599/#review38701
---


Overall looks fine to me. A couple of minor questions/comments below. They 
don't have to be addressed as part of this patch. Thanks!


service/src/java/org/apache/hive/service/cli/CLIService.java
https://reviews.apache.org/r/19599/#comment70996

Just curious, do the two principals need to be different? Can't the same 
user run the service as well as authenticate with Knox?




service/src/java/org/apache/hive/service/cli/CLIService.java
https://reviews.apache.org/r/19599/#comment70997

Should this throw an exception instead of a warning?


- Prasad Mujumdar


On March 26, 2014, 2:38 a.m., dilli dorai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19599/
 ---
 
 (Updated March 26, 2014, 2:38 a.m.)
 
 
 Review request for hive, Ashutosh Chauhan, Thejas Nair, and Vaibhav Gumashta.
 
 
 Bugs: HIVE-6697
 https://issues.apache.org/jira/browse/HIVE-6697
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 See JIRA for the description:
 https://issues.apache.org/jira/browse/HIVE-6697
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java affcbb4 
   conf/hive-default.xml.template 3c3df43 
   service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java 6e6a47d 
   service/src/java/org/apache/hive/service/cli/CLIService.java e31a74e 
   
 service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java 
 cb01cfd 
   service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 
 255a165 
   shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java 
 9aa555a 
   
 shims/common-secure/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
  d4cddda 
   shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 
 ed951f1 
 
 Diff: https://reviews.apache.org/r/19599/diff/
 
 
 Testing
 ---
 
 ## Verification of enhancement with Beeline/JDBC 
 
 ### Verified the following calls succeeded in getting a connection and listing 
 tables, 
 when a valid spnego.principal and spnego.keytab are specified in hive-site.xml, 
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ### Verified the following call succeeded in getting a connection and listing 
 tables, 
 even if a valid spnego.principal or spnego.keytab is not specified in 
 hive-site.xml, 
 as long as a valid HiveServer2 Kerberos principal and keytab are specified in 
 hive-site.xml, 
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ### Verified the following call failed to get a connection, 
 when a valid spnego.principal or spnego.keytab is not specified in 
 hive-site.xml
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ## Verification of enhancement with Apache Knox
 
 Apache Knox was able to authenticate to HiveServer2 as a SPNego client using 
 Apache HttpClient, 
 and list tables, when a correct spnego.principal and spnego.keytab are 
 specified in hive-site.xml
 
 Apache Knox was not able to authenticate to HiveServer2 as a SPNego client 
 using Apache HttpClient, 
 when a valid spnego.principal or spnego.keytab is not specified in hive-site.xml
 
 ## Verification of enhancement with curl
 
 ### when a valid spnego.principal and spnego.keytab are specified in 
 hive-site.xml 
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 curl -i --negotiate -u : http://hdps.example.com:10001/cliservice
 
 SPNego authentication succeeded and we got an HTTP status code 500, 
 since we did not send Thrift body content
 
 ### when a valid spnego.principal and spnego.keytab are specified in 
 hive-site.xml 
 and the client has not KINITed and does not have a valid Kerberos ticket in 
 cache
 
 curl -i --negotiate -u : http://hdps.example.com:10001/cliservice
 
 url -i --negotiate 

[jira] [Updated] (HIVE-6752) Vectorized Between and IN expressions don't work with decimal, date types.

2014-03-27 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6752:
---

Attachment: HIVE-6752.1.patch

 Vectorized Between and IN expressions don't work with decimal, date types.
 --

 Key: HIVE-6752
 URL: https://issues.apache.org/jira/browse/HIVE-6752
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6752.1.patch


 Vectorized Between and IN expressions don't work with decimal, date types.





[jira] [Updated] (HIVE-6752) Vectorized Between and IN expressions don't work with decimal, date types.

2014-03-27 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-6752:
---

Status: Patch Available  (was: Open)

 Vectorized Between and IN expressions don't work with decimal, date types.
 --

 Key: HIVE-6752
 URL: https://issues.apache.org/jira/browse/HIVE-6752
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6752.1.patch


 Vectorized Between and IN expressions don't work with decimal, date types.





Review Request 19718: Vectorized Between and IN expressions don't work with decimal, date types.

2014-03-27 Thread Jitendra Pandey

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19718/
---

Review request for hive and Eric Hanson.


Bugs: HIVE-6752
https://issues.apache.org/jira/browse/HIVE-6752


Repository: hive-git


Description
---

Vectorized Between and IN expressions don't work with decimal, date types.


Diffs
-

  ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java 44b0c59 
  ql/src/gen/vectorization/ExpressionTemplates/FilterDecimalColumnBetween.txt 
PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
96e74a9 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/CastDateToString.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalColumnInList.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterDecimalColumnInList.java
 PRE-CREATION 
  
ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/IDecimalInExpr.java
 PRE-CREATION 
  ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
c2240c0 
  ql/src/test/queries/clientpositive/vector_between_in.q PRE-CREATION 
  ql/src/test/results/clientpositive/vector_between_in.q.out PRE-CREATION 

Diff: https://reviews.apache.org/r/19718/diff/


Testing
---


Thanks,

Jitendra Pandey



[jira] [Commented] (HIVE-6752) Vectorized Between and IN expressions don't work with decimal, date types.

2014-03-27 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13948981#comment-13948981
 ] 

Jitendra Nath Pandey commented on HIVE-6752:


Review board entry: https://reviews.apache.org/r/19718/

 Vectorized Between and IN expressions don't work with decimal, date types.
 --

 Key: HIVE-6752
 URL: https://issues.apache.org/jira/browse/HIVE-6752
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6752.1.patch


 Vectorized Between and IN expressions don't work with decimal, date types.





[jira] [Commented] (HIVE-6625) HiveServer2 running in http mode should support trusted proxy access

2014-03-27 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949000#comment-13949000
 ] 

Vaibhav Gumashta commented on HIVE-6625:


[~leftylev] Thanks for the nudge! There's a bunch of work going in for the http 
mode of HiveServer2. I'll update the wiki cumulatively in a few days.

 HiveServer2 running in http mode should support trusted proxy access
 

 Key: HIVE-6625
 URL: https://issues.apache.org/jira/browse/HIVE-6625
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6625.1.patch, HIVE-6625.2.patch


 HIVE-5155 adds trusted proxy access to HiveServer2. This patch is a minor 
 change to have it used when running HiveServer2 in http mode. Patch to be 
 applied on top of HIVE-4764 & HIVE-5155.





[jira] [Commented] (HIVE-6753) Unions on Tez NPE when there's a mapjoin in the union work

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949015#comment-13949015
 ] 

Hive QA commented on HIVE-6753:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12636867/HIVE-6753.1.patch

{color:green}SUCCESS:{color} +1 5491 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1976/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1976/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12636867

 Unions on Tez NPE when there's a mapjoin in the union work
 ---

 Key: HIVE-6753
 URL: https://issues.apache.org/jira/browse/HIVE-6753
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Attachments: HIVE-6753.1.patch


 In some cases when there's a mapjoin in union work we need to broadcast the 
 same result set to multiple downstream work items. This causes a vertex 
 failure right now.





[jira] [Created] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)
Adrian Wang created HIVE-6765:
-

 Summary: ASTNodeOrigin unserializable leads to fail when join with 
view
 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Adrian Wang


When a view contains a UDF, and the view is involved in a JOIN operation, Hive 
encounters a failure with a stack trace like:
Caused by: java.lang.InstantiationException: 
org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
at java.lang.Class.newInstance0(Class.java:359)
at java.lang.Class.newInstance(Class.java:327)
at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)





[jira] [Commented] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949052#comment-13949052
 ] 

Adrian Wang commented on HIVE-6765:
---

I added a PersistenceDelegate in serializeObject() in class Utilities and 
resolved the problem. I'll attach the patch later.
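For context, the JavaBeans XMLEncoder generally cannot encode a class that has no public no-arg constructor unless a PersistenceDelegate tells it which constructor to use. A minimal sketch of that idea (the Origin class below is a hypothetical stand-in for ASTNodeOrigin, not Hive's actual code):

```java
import java.beans.DefaultPersistenceDelegate;
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class DelegateSketch {
    // Hypothetical stand-in for ASTNodeOrigin: immutable, with no
    // public no-arg constructor for XMLEncoder to call by default.
    public static class Origin {
        private final String objectName;
        public Origin(String objectName) { this.objectName = objectName; }
        public String getObjectName() { return objectName; }
    }

    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(out);
        // Teach the encoder to rebuild Origin through its one-arg
        // constructor, reading the argument back via getObjectName().
        enc.setPersistenceDelegate(Origin.class,
                new DefaultPersistenceDelegate(new String[] {"objectName"}));
        enc.writeObject(new Origin("view v1"));
        enc.close();
        // The XML now records the constructor argument instead of
        // requiring a no-arg instantiation.
        System.out.println(out.toString().contains("view v1")
                ? "serialized" : "failed");
    }
}
```

Registering the delegate once, before encoding the plan, is the same shape of fix described above for serializeObject().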

 ASTNodeOrigin unserializable leads to fail when join with view
 --

 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Adrian Wang

 When a view contains a UDF, and the view is involved in a JOIN operation, Hive 
 encounters a failure with a stack trace like:
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
   at java.lang.Class.newInstance0(Class.java:359)
   at java.lang.Class.newInstance(Class.java:327)
   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)





[jira] [Updated] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Wang updated HIVE-6765:
--

Attachment: HIVE-6765.patch.1

 ASTNodeOrigin unserializable leads to fail when join with view
 --

 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Adrian Wang
 Attachments: HIVE-6765.patch.1


 When a view contains a UDF, and the view is involved in a JOIN operation, Hive 
 encounters a failure with a stack trace like:
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
   at java.lang.Class.newInstance0(Class.java:359)
   at java.lang.Class.newInstance(Class.java:327)
   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)





[jira] [Commented] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949075#comment-13949075
 ] 

Adrian Wang commented on HIVE-6765:
---

Here's an example that triggers the exception:
CREATE TABLE t1 (a1 INT, b1 INT);
CREATE VIEW v1 (x1) AS SELECT MAX(a1) FROM t1;
SELECT s1.x1 FROM v1 s1 JOIN (SELECT MAX(a1) AS ma FROM t1) s2 ON s1.x1 = s2.ma;

This is a bug on both Apache Hive and Tez, outputting return code 1 ...
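As an aside, the InstantiationException in the stack trace is XMLEncoder's generic behavior for classes without a public no-arg constructor; by default the encoder only reports such problems to an ExceptionListener. A standalone sketch of the failure mode (the NoDefault class is hypothetical, not Hive code):

```java
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class EncoderFailureSketch {
    // Hypothetical class with no public no-arg constructor; the
    // default XMLEncoder strategy cannot instantiate it.
    public static class NoDefault {
        private final int value;
        public NoDefault(int value) { this.value = value; }
        public int getValue() { return value; }
    }

    public static void main(String[] args) {
        XMLEncoder enc = new XMLEncoder(new ByteArrayOutputStream());
        final boolean[] failed = {false};
        // By default the failure would only be logged to stderr; a
        // listener (as in Utilities.serializeObject(), which rethrows)
        // lets the caller observe it.
        enc.setExceptionListener(e -> failed[0] = true);
        enc.writeObject(new NoDefault(42));
        enc.close();
        System.out.println(failed[0] ? "encoding failed" : "encoded");
    }
}
```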

 ASTNodeOrigin unserializable leads to fail when join with view
 --

 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Adrian Wang
 Attachments: HIVE-6765.patch.1


 When a view contains a UDF, and the view is involved in a JOIN operation, Hive 
 encounters a failure with a stack trace like:
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
   at java.lang.Class.newInstance0(Class.java:359)
   at java.lang.Class.newInstance(Class.java:327)
   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)





[jira] [Commented] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949076#comment-13949076
 ] 

Adrian Wang commented on HIVE-6765:
---

And I think this is just another drawback of using XMLEncoder to clone the plan.

 ASTNodeOrigin unserializable leads to fail when join with view
 --

 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Adrian Wang
 Attachments: HIVE-6765.patch.1


 When a view contains a UDF, and the view is involved in a JOIN operation, Hive 
 encounters a failure with a stack trace like:
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
   at java.lang.Class.newInstance0(Class.java:359)
   at java.lang.Class.newInstance(Class.java:327)
   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)





[jira] [Commented] (HIVE-6129) alter exchange is implemented in inverted manner

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949102#comment-13949102
 ] 

Hive QA commented on HIVE-6129:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12636955/HIVE-6129.2.patch

{color:green}SUCCESS:{color} +1 5491 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1977/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1977/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12636955

 alter exchange is implemented in inverted manner
 

 Key: HIVE-6129
 URL: https://issues.apache.org/jira/browse/HIVE-6129
 Project: Hive
  Issue Type: Bug
Reporter: Navis
Assignee: Navis
Priority: Critical
 Attachments: HIVE-6129.1.patch.txt, HIVE-6129.2.patch


 see 
 https://issues.apache.org/jira/browse/HIVE-4095?focusedCommentId=13819885&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13819885
 alter exchange should be implemented according to the document at 
 https://cwiki.apache.org/confluence/display/Hive/Exchange+Partition, i.e. 
 {code}
 alter table T1 exchange partition (ds='1') with table T2 
 {code}
 should be (after creating T1@ds=1) 
 {quote}
 moves the data from T2 to T1@ds=1 
 {quote}





[jira] [Commented] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949113#comment-13949113
 ] 

Adrian Wang commented on HIVE-6765:
---

Sorry, the previous example works on Tez with Hive 0.13.
But it fails when I run the query in Hive 0.12 in Eclipse.

 ASTNodeOrigin unserializable leads to fail when join with view
 --

 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0
Reporter: Adrian Wang
 Attachments: HIVE-6765.patch.1


 When a view contains a UDF, and the view is involved in a JOIN operation, Hive 
 encounters a failure with a stack trace like:
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
   at java.lang.Class.newInstance0(Class.java:359)
   at java.lang.Class.newInstance(Class.java:327)
   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)





[jira] [Commented] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-03-27 Thread Adrian Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949118#comment-13949118
 ] 

Adrian Wang commented on HIVE-6765:
---

When I run the test case in the Hive command line (0.12 release), the full output is 
as follows:
hive> SELECT s1.x1 FROM v1 s1 JOIN (SELECT MAX(a1) AS ma FROM t1) s2 ON s1.x1 = 
s2.ma;
java.lang.RuntimeException: Cannot serialize object
at 
org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.java:652)
at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:361)
at java.beans.XMLEncoder.writeObject(XMLEncoder.java:277)
at 
org.apache.hadoop.hive.ql.exec.Utilities.serializeObject(Utilities.java:666)
at 
org.apache.hadoop.hive.ql.exec.Utilities.clonePlan(Utilities.java:637)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinTaskDispatcher.processCurrentTask(CommonJoinTaskDispatcher.java:505)
at 
org.apache.hadoop.hive.ql.optimizer.physical.AbstractJoinTaskDispatcher.dispatch(AbstractJoinTaskDispatcher.java:182)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:194)
at 
org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:139)
at 
org.apache.hadoop.hive.ql.optimizer.physical.CommonJoinResolver.resolve(CommonJoinResolver.java:79)
at 
org.apache.hadoop.hive.ql.optimizer.physical.PhysicalOptimizer.optimize(PhysicalOptimizer.java:90)
at 
org.apache.hadoop.hive.ql.parse.MapReduceCompiler.compile(MapReduceCompiler.java:300)
at 
org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8410)
at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:284)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:441)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:342)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:977)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:888)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:781)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:616)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.Exception: XMLEncoder: discarding statement 
XMLEncoder.writeObject(MapredWork);
... 29 more
Caused by: java.lang.RuntimeException: Cannot serialize object
at 
org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.java:652)
at 
java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:267)
at 
java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:408)
at 
java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:116)
at java.beans.Encoder.writeObject(Encoder.java:74)
at java.beans.XMLEncoder.writeObject(XMLEncoder.java:274)
at java.beans.Encoder.writeExpression(Encoder.java:304)
at java.beans.XMLEncoder.writeExpression(XMLEncoder.java:389)
at 
java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:113)
at java.beans.Encoder.writeObject(Encoder.java:74)
at java.beans.XMLEncoder.writeObject(XMLEncoder.java:274)
at java.beans.Encoder.writeObject1(Encoder.java:231)
at java.beans.Encoder.cloneStatement(Encoder.java:244)
at java.beans.Encoder.writeStatement(Encoder.java:275)
at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:348)
... 28 more
Caused by: java.lang.RuntimeException: Cannot serialize object
at 
org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.java:652)
at 
java.beans.DefaultPersistenceDelegate.initBean(DefaultPersistenceDelegate.java:267)
at 
java.beans.DefaultPersistenceDelegate.initialize(DefaultPersistenceDelegate.java:408)
at 
java.beans.PersistenceDelegate.writeObject(PersistenceDelegate.java:116)
at java.beans.Encoder.writeObject(Encoder.java:74)
at java.beans.XMLEncoder.writeObject(XMLEncoder.java:274)
at 

[jira] [Commented] (HIVE-6314) The logging (progress reporting) is too verbose

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949190#comment-13949190
 ] 

Hive QA commented on HIVE-6314:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12636958/HIVE-6314.2.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5491 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1979/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1979/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12636958

 The logging (progress reporting) is too verbose
 ---

 Key: HIVE-6314
 URL: https://issues.apache.org/jira/browse/HIVE-6314
 Project: Hive
  Issue Type: Bug
Reporter: Sam
Assignee: Navis
  Labels: logger
 Attachments: HIVE-6314.1.patch.txt, HIVE-6314.2.patch


 The progress report is issued every second even when no progress has been 
 made:
 {code}
 2014-01-27 10:35:55,209 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.68 
 sec
 2014-01-27 10:35:56,678 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.68 
 sec
 2014-01-27 10:35:59,344 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.68 
 sec
 2014-01-27 10:36:01,268 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 8.67 sec
 2014-01-27 10:36:03,149 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 8.67 sec
 {code}
 This pollutes the logs and the screen, and people do not appreciate it as 
 much as the designers might have thought 
 ([http://stackoverflow.com/questions/20849289/how-do-i-limit-log-verbosity-of-hive],
  
 [http://stackoverflow.com/questions/14121543/controlling-the-level-of-verbosity-in-hive]).
 It would be nice to be able to control the level of verbosity (but *not* by 
 the {{-v}} switch!):
 # Make sure that the progress report is only issued where there is something 
 new to report; or
 # Remove all the progress messages; or
 # Make sure that progress is reported only every X sec (instead of every 1 
 second)
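The first and third options above can be combined in a small rate limiter: emit a progress line only when the counters changed or a minimum interval elapsed. A sketch under those assumptions (illustrative only, not the attached HIVE-6314 patch):

```java
public class ProgressReporter {
    private String lastLine = "";
    private long lastEmitMs = 0;
    private final long minIntervalMs;

    public ProgressReporter(long minIntervalMs) {
        this.minIntervalMs = minIntervalMs;
    }

    /** Returns the line to print, or null when it should be suppressed. */
    public String report(long nowMs, int mapPct, int reducePct, double cpuSec) {
        String line = String.format(
                "Stage-1 map = %d%%,  reduce = %d%%, Cumulative CPU %.2f sec",
                mapPct, reducePct, cpuSec);
        // Suppress when nothing changed and the interval has not elapsed.
        if (line.equals(lastLine) && nowMs - lastEmitMs < minIntervalMs) {
            return null;
        }
        lastLine = line;
        lastEmitMs = nowMs;
        return line;
    }

    public static void main(String[] args) {
        ProgressReporter r = new ProgressReporter(10_000);
        System.out.println(r.report(0, 100, 0, 6.68));       // first report: emitted
        System.out.println(r.report(1_500, 100, 0, 6.68));   // unchanged, too soon: null
        System.out.println(r.report(2_500, 100, 100, 8.67)); // counters changed: emitted
    }
}
```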



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6685) Beeline throws ArrayIndexOutOfBoundsException for mismatched arguments

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949304#comment-13949304
 ] 

Hive QA commented on HIVE-6685:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12636996/HIVE-6685.4.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5499 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1980/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1980/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12636996

 Beeline throws ArrayIndexOutOfBoundsException for mismatched arguments
 --

 Key: HIVE-6685
 URL: https://issues.apache.org/jira/browse/HIVE-6685
 Project: Hive
  Issue Type: Bug
  Components: CLI
Affects Versions: 0.12.0
Reporter: Szehon Ho
Assignee: Szehon Ho
 Attachments: HIVE-6685.2.patch, HIVE-6685.3.patch, HIVE-6685.4.patch, 
 HIVE-6685.patch


 Noticed that there is an ugly ArrayIndexOutOfBoundsException for mismatched 
 arguments in beeline prompt.  It would be nice to cleanup.
 Example:
 {noformat}
 beeline -u szehon -p
 Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 3
   at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:560)
   at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:628)
   at 
 org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:366)
   at org.apache.hive.beeline.BeeLine.main(BeeLine.java:349)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {noformat}
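The crash comes from reading the value of a flag such as -p past the end of the argument array. A bounds-checked sketch of the idea (hypothetical code, not Beeline's actual initArgs):

```java
import java.util.HashMap;
import java.util.Map;

public class ArgsDemo {
    // Bounds-checked "-flag value" parsing: report a readable error instead
    // of letting args[i + 1] throw ArrayIndexOutOfBoundsException.
    public static Map<String, String> parse(String[] args) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].startsWith("-")) {
                if (i + 1 >= args.length) {
                    throw new IllegalArgumentException(
                            "missing value for option " + args[i]);
                }
                opts.put(args[i], args[++i]);
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        System.out.println(parse(new String[] {"-u", "szehon"}));
        try {
            parse(new String[] {"-u", "szehon", "-p"}); // value for -p missing
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```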



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19599: HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread dilli dorai


 On March 27, 2014, 6:54 a.m., Prasad Mujumdar wrote:
  service/src/java/org/apache/hive/service/cli/CLIService.java, line 96
  https://reviews.apache.org/r/19599/diff/3/?file=537238#file537238line96
 
  Just curious, do the two principals need to be different ? Can't the 
  same user run the service as well as authenticate with Knox ?
 

Thanks Prasad for review.

Per the SPNego protocol, an HTTP client expects the HTTP service principal to be of 
the form HTTP/HOST@DOMAIN.

The HTTP/HOST@DOMAIN principal is used for mutual authentication with the HTTP 
client.

hive/HOST@DOMAIN is used for mutual authentication with other Hadoop 
services (non-HTTP).


 On March 27, 2014, 6:54 a.m., Prasad Mujumdar wrote:
  service/src/java/org/apache/hive/service/cli/CLIService.java, line 106
  https://reviews.apache.org/r/19599/diff/3/?file=537238#file537238line106
 
  Should this throw an exception instead or warning ?

This could throw an exception if we made SPNego the only supported mutual 
authentication for HTTP clients.
At present, we are keeping it optional and continuing to support the existing 
non-standard Kerberos-over-HTTP authentication.
Hence the warning instead of an exception.


- dilli


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19599/#review38701
---


On March 26, 2014, 2:38 a.m., dilli dorai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19599/
 ---
 
 (Updated March 26, 2014, 2:38 a.m.)
 
 
 Review request for hive, Ashutosh Chauhan, Thejas Nair, and Vaibhav Gumashta.
 
 
 Bugs: HIVE-6697
 https://issues.apache.org/jira/browse/HIVE-6697
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 See the JIRA for the description
 https://issues.apache.org/jira/browse/HIVE-6697
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java affcbb4 
   conf/hive-default.xml.template 3c3df43 
   service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java 6e6a47d 
   service/src/java/org/apache/hive/service/cli/CLIService.java e31a74e 
   
 service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java 
 cb01cfd 
   service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 
 255a165 
   shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java 
 9aa555a 
   
 shims/common-secure/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
  d4cddda 
   shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 
 ed951f1 
 
 Diff: https://reviews.apache.org/r/19599/diff/
 
 
 Testing
 ---
 
 ## Verification of enhancement with Beeline/JDBC 
 
 ### Verified the following calls succeeded getting a connection and listing 
 tables, 
 when valid spnego.principal and spnego.keytab are specified in hive-site.xml,
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ### Verified the following call succeeded getting a connection and listing 
 tables, 
 even if a valid spnego.principal or valid spnego.keytab is not specified in 
 hive-site.xml,
 as long as a valid HiveServer2 Kerberos principal and keytab are specified in 
 hive-site.xml,
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ### Verified the following call failed getting a connection 
 when a valid spnego.principal or valid spnego.keytab is not specified in 
 hive-site.xml
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ## Verification of enhancement with Apache Knox
 
 Apache Knox was able to authenticate to HiveServer2 as a SPNego client using 
 Apache HttpClient,
 and list tables, when the correct spnego.principal and spnego.keytab are 
 specified in hive-site.xml
 
 Apache Knox was not able to authenticate to HiveServer2 as a SPNego client 
 using Apache HttpClient,
 when a valid spnego.principal or spnego.keytab is not specified in hive-site.xml
 
 ## Verification of 

Re: Review Request 19599: HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread dilli dorai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19599/#review38573
---



conf/hive-default.xml.template
https://reviews.apache.org/r/19599/#comment70848

fixed


- dilli dorai


On March 26, 2014, 2:38 a.m., dilli dorai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19599/
 ---
 
 (Updated March 26, 2014, 2:38 a.m.)
 
 
 Review request for hive, Ashutosh Chauhan, Thejas Nair, and Vaibhav Gumashta.
 
 
 Bugs: HIVE-6697
 https://issues.apache.org/jira/browse/HIVE-6697
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 See the JIRA for the description
 https://issues.apache.org/jira/browse/HIVE-6697
 
 
 Diffs
 -
 
   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java affcbb4 
   conf/hive-default.xml.template 3c3df43 
   service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java 6e6a47d 
   service/src/java/org/apache/hive/service/cli/CLIService.java e31a74e 
   
 service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java 
 cb01cfd 
   service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 
 255a165 
   shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java 
 9aa555a 
   
 shims/common-secure/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
  d4cddda 
   shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 
 ed951f1 
 
 Diff: https://reviews.apache.org/r/19599/diff/
 
 
 Testing
 ---
 
 ## Verification of enhancement with Beeline/JDBC 
 
 ### Verified the following calls succeeded getting a connection and listing 
 tables, 
 when valid spnego.principal and spnego.keytab are specified in hive-site.xml,
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ### Verified the following call succeeded getting a connection and listing 
 tables, 
 even if a valid spnego.principal or valid spnego.keytab is not specified in 
 hive-site.xml,
 as long as a valid HiveServer2 Kerberos principal and keytab are specified in 
 hive-site.xml,
 and the client has KINITed and has a valid Kerberos ticket in cache
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ### Verified the following call failed getting a connection 
 when a valid spnego.principal or valid spnego.keytab is not specified in 
 hive-site.xml
 
 !connect 
 jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
   dummy dummy-pass org.apache.hive.jdbc.HiveDriver 
 
 ## Verification of enhancement with Apache Knox
 
 Apache Knox was able to authenticate to HiveServer2 as a SPNego client using 
 Apache HttpClient,
 and list tables, when the correct spnego.principal and spnego.keytab are 
 specified in hive-site.xml
 
 Apache Knox was not able to authenticate to HiveServer2 as a SPNego client 
 using Apache HttpClient,
 when a valid spnego.principal or spnego.keytab is not specified in hive-site.xml
 
 ## Verification of enhancement with curl
 
 ### when valid spnego.principal and spnego.keytab are specified in 
 hive-site.xml
 and the client has KINITed and has a valid kerberos ticket in cache
 
 curl -i --negotiate -u : http://hdps.example.com:10001/cliservice
 
 SPNego authentication succeeded and got an HTTP status code 500,
 since we did not send Thrift body content
 
 ### when valid spnego.principal and spnego.keytab are specified in 
 hive-site.xml
 and the client has not KINITed and does not have a valid Kerberos ticket in 
 cache
 
 curl -i --negotiate -u : http://hdps.example.com:10001/cliservice
 
 curl -i --negotiate -u : http://hdps.example.com:10001/cliservice
 HTTP/1.1 401 Unauthorized
 WWW-Authenticate: Negotiate
 Content-Type: application/x-thrift;charset=ISO-8859-1
 Content-Length: 69
 Server: Jetty(7.6.0.v20120127)
 
 Authentication Error: java.lang.reflect.UndeclaredThrowableException
 
 
 Thanks,
 
 dilli dorai
 




[jira] [Updated] (HIVE-6738) HiveServer2 secure Thrift/HTTP needs to accept doAs parameter from proxying intermediary

2014-03-27 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6738:
-

Attachment: hive-6738-req-impl-verify-rev1.md

description revised based on review input from [~vaibhavgumashta]

 HiveServer2 secure Thrift/HTTP needs to accept doAs parameter from proxying 
 intermediary
 

 Key: HIVE-6738
 URL: https://issues.apache.org/jira/browse/HIVE-6738
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6738.patch, hive-6738-req-impl-verify-rev1.md, 
 hive-6738-req-impl-verify.md


 See the already-implemented JIRA
  https://issues.apache.org/jira/browse/HIVE-5155
 Support secure proxy user access to HiveServer2
 That fix expects the hive.server2.proxy.user parameter to come in the Thrift 
 body.
 When an intermediary gateway like Apache Knox is authenticating the end 
 client and then proxying the request to HiveServer2, it is not practical for 
 an intermediary like Apache Knox to modify Thrift content.
 An intermediary like Apache Knox should be able to assert doAs in a query 
 parameter. This paradigm is already established by other Hadoop ecosystem 
 components like WebHDFS, WebHCat, Oozie, and HBase, and Hive needs to be 
 aligned with them.
 The doAs asserted in the query parameter should override any doAs specified in 
 the Thrift body.
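The precedence rule in the last sentence can be sketched as follows (illustrative only, not the HIVE-6738 patch; authorizing whether the authenticated caller may proxy at all is assumed to be handled by the HIVE-5155 proxy-user check):

```java
public class DoAsResolver {
    // Precedence sketch: a doAs query parameter asserted by a trusted
    // intermediary (e.g. Apache Knox) overrides hive.server2.proxy.user
    // from the Thrift body; with neither, the authenticated user is used.
    public static String effectiveUser(String doAsQueryParam,
                                       String thriftProxyUser,
                                       String authenticatedUser) {
        if (doAsQueryParam != null && !doAsQueryParam.isEmpty()) {
            return doAsQueryParam;
        }
        if (thriftProxyUser != null && !thriftProxyUser.isEmpty()) {
            return thriftProxyUser;
        }
        return authenticatedUser;
    }

    public static void main(String[] args) {
        System.out.println(effectiveUser("alice", "bob", "knox")); // alice
        System.out.println(effectiveUser(null, "bob", "knox"));    // bob
        System.out.println(effectiveUser(null, null, "knox"));     // knox
    }
}
```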



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6129) alter exchange is implemented in inverted manner

2014-03-27 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6129:


Fix Version/s: 0.13.0

 alter exchange is implemented in inverted manner
 

 Key: HIVE-6129
 URL: https://issues.apache.org/jira/browse/HIVE-6129
 Project: Hive
  Issue Type: Bug
Reporter: Navis
Assignee: Navis
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-6129.1.patch.txt, HIVE-6129.2.patch


 see 
 https://issues.apache.org/jira/browse/HIVE-4095?focusedCommentId=13819885&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13819885
 alter exchange should be implemented according to the document at 
 https://cwiki.apache.org/confluence/display/Hive/Exchange+Partition, i.e. 
 {code}
 alter table T1 exchange partition (ds='1') with table T2 
 {code}
 should be (after creating T1@ds=1) 
 {quote}
 moves the data from T2 to T1@ds=1 
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6129) alter exchange is implemented in inverted manner

2014-03-27 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6129:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

committed to trunk and 0.13

 alter exchange is implemented in inverted manner
 

 Key: HIVE-6129
 URL: https://issues.apache.org/jira/browse/HIVE-6129
 Project: Hive
  Issue Type: Bug
Reporter: Navis
Assignee: Navis
Priority: Critical
 Attachments: HIVE-6129.1.patch.txt, HIVE-6129.2.patch


 see 
 https://issues.apache.org/jira/browse/HIVE-4095?focusedCommentId=13819885&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13819885
 alter exchange should be implemented according to the document at 
 https://cwiki.apache.org/confluence/display/Hive/Exchange+Partition, i.e. 
 {code}
 alter table T1 exchange partition (ds='1') with table T2 
 {code}
 should be (after creating T1@ds=1) 
 {quote}
 moves the data from T2 to T1@ds=1 
 {quote}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-2752) Index names are case sensitive

2014-03-27 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-2752:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13

 Index names are case sensitive
 --

 Key: HIVE-2752
 URL: https://issues.apache.org/jira/browse/HIVE-2752
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore, Query Processor
Affects Versions: 0.9.0
Reporter: Philip Tromans
Assignee: Navis
Priority: Minor
 Attachments: HIVE-2752.1.patch.txt

   Original Estimate: 4h
  Remaining Estimate: 4h

 The following script:
 DROP TABLE IF EXISTS TestTable;
 CREATE TABLE TestTable (a INT);
 DROP INDEX IF EXISTS TestTableA_IDX ON TestTable;
 CREATE INDEX TestTableA_IDX ON TABLE TestTable (a) AS 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 ALTER INDEX TestTableA_IDX ON TestTable REBUILD;
 results in the following exception:
 MetaException(message:index testtablea_idx doesn't exist)
   at 
 org.apache.hadoop.hive.metastore.ObjectStore.alterIndex(ObjectStore.java:1880)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$30.run(HiveMetaStore.java:1930)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$30.run(HiveMetaStore.java:1927)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:356)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_index(HiveMetaStore.java:1927)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_index(HiveMetaStoreClient.java:868)
   at org.apache.hadoop.hive.ql.metadata.Hive.alterIndex(Hive.java:398)
   at org.apache.hadoop.hive.ql.exec.DDLTask.alterIndex(DDLTask.java:902)
   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:236)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:338)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:436)
   at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:446)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:642)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 When you execute: SHOW INDEXES ON TestTable;, you get:
 TestTableA_IDX	testtable	a	default__testtable_testtablea_idx__	compact
 so it looks like things don't get lowercased when they go into the 
 metastore, but they do when the rebuild op is trying to execute.
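The mismatch described above is a one-sided normalization bug: names are lowercased on one path (the rebuild lookup) but not on the other (the create). A sketch of the invariant the fix needs to restore, using a hypothetical in-memory store rather than the real metastore:

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class IndexNameDemo {
    private final Map<String, String> indexes = new HashMap<>();

    // Normalize once, at every boundary: both store and lookup use the
    // same lowercase key, so mixed-case DDL resolves consistently.
    private static String normalize(String name) {
        return name.toLowerCase(Locale.ROOT);
    }

    public void create(String name, String handler) {
        indexes.put(normalize(name), handler);
    }

    public String lookup(String name) {
        return indexes.get(normalize(name));
    }

    public static void main(String[] args) {
        IndexNameDemo store = new IndexNameDemo();
        store.create("TestTableA_IDX", "compact");
        System.out.println(store.lookup("testtablea_idx")); // compact
        System.out.println(store.lookup("TestTableA_IDX")); // compact
    }
}
```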



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-2752) Index names are case sensitive

2014-03-27 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-2752:


Fix Version/s: 0.13.0

 Index names are case sensitive
 --

 Key: HIVE-2752
 URL: https://issues.apache.org/jira/browse/HIVE-2752
 Project: Hive
  Issue Type: Bug
  Components: Indexing, Metastore, Query Processor
Affects Versions: 0.9.0
Reporter: Philip Tromans
Assignee: Navis
Priority: Minor
 Fix For: 0.13.0

 Attachments: HIVE-2752.1.patch.txt

   Original Estimate: 4h
  Remaining Estimate: 4h

 The following script:
 DROP TABLE IF EXISTS TestTable;
 CREATE TABLE TestTable (a INT);
 DROP INDEX IF EXISTS TestTableA_IDX ON TestTable;
 CREATE INDEX TestTableA_IDX ON TABLE TestTable (a) AS 
 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' WITH DEFERRED 
 REBUILD;
 ALTER INDEX TestTableA_IDX ON TestTable REBUILD;
 results in the following exception:
 MetaException(message:index testtablea_idx doesn't exist)
   at 
 org.apache.hadoop.hive.metastore.ObjectStore.alterIndex(ObjectStore.java:1880)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$30.run(HiveMetaStore.java:1930)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$30.run(HiveMetaStore.java:1927)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:356)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_index(HiveMetaStore.java:1927)
   at 
 org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_index(HiveMetaStoreClient.java:868)
   at org.apache.hadoop.hive.ql.metadata.Hive.alterIndex(Hive.java:398)
   at org.apache.hadoop.hive.ql.exec.DDLTask.alterIndex(DDLTask.java:902)
   at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:236)
   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:134)
   at 
 org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1332)
   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1123)
   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:255)
   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:212)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:338)
   at 
 org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:436)
   at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:446)
   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:642)
   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:554)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
 When you execute: SHOW INDEXES ON TestTable;, you get:
 TestTableA_IDX	testtable	a	default__testtable_testtablea_idx__	compact
 so it looks like things don't get lowercased when they go into the 
 metastore, but they do when the rebuild op is trying to execute.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6200) Hive custom SerDe cannot load DLL added by ADD FILE command

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949459#comment-13949459
 ] 

Hive QA commented on HIVE-6200:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12636997/HIVE-6200.3.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5491 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1983/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1983/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12636997

 Hive custom SerDe cannot load DLL added by ADD FILE command
 -

 Key: HIVE-6200
 URL: https://issues.apache.org/jira/browse/HIVE-6200
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Attachments: HIVE-6200.1.patch, HIVE-6200.2.patch, HIVE-6200.3.patch


 When a custom SerDe needs to load a DLL file added with the ADD FILE command in 
 Hive, the load fails with an exception like 
 java.lang.UnsatisfiedLinkError:C:\tmp\admin2_6996@headnode0_201401100431_resources\hello.dll:
  Access is denied. 
 The reason is that when FileSystem creates the local copy of the file, the 
 permission of the local file defaults to 666. A DLL file needs execute 
 permission to be loaded successfully.
 A similar scenario also happens when Hadoop localizes files in the distributed 
 cache. The solution in Hadoop is to add execute permission to the file 
 after localization.
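The fix described above (mirroring Hadoop's distributed-cache behavior) can be sketched as follows. This is an illustrative sketch, not Hive's actual code; the class and method names are invented for the example:

```java
import java.io.File;
import java.io.IOException;

public class LocalizedFilePermission {
    // After a file is localized (copied into the local resource directory),
    // grant execute permission so that native libraries such as DLLs can be
    // loaded via System.load(). This mirrors what Hadoop does for files
    // localized into the distributed cache.
    public static boolean makeExecutable(File localCopy) {
        // ownerOnly = false: grant execute broadly, matching the permissive
        // 666 default the local copy already has for read/write.
        return localCopy.setExecutable(true, false);
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("hello", ".dll");
        f.deleteOnExit();
        System.out.println(makeExecutable(f) && f.canExecute());
    }
}
```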



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6314) The logging (progress reporting) is too verbose

2014-03-27 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949452#comment-13949452
 ] 

Harish Butani commented on HIVE-6314:
-

+1

 The logging (progress reporting) is too verbose
 ---

 Key: HIVE-6314
 URL: https://issues.apache.org/jira/browse/HIVE-6314
 Project: Hive
  Issue Type: Bug
Reporter: Sam
Assignee: Navis
  Labels: logger
 Attachments: HIVE-6314.1.patch.txt, HIVE-6314.2.patch


 The progress report is issued every second even when no progress have been 
 made:
 {code}
 2014-01-27 10:35:55,209 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.68 
 sec
 2014-01-27 10:35:56,678 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.68 
 sec
 2014-01-27 10:35:59,344 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 6.68 
 sec
 2014-01-27 10:36:01,268 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 8.67 sec
 2014-01-27 10:36:03,149 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 
 8.67 sec
 {code}
 This pollutes the logs and the screen, and people do not appreciate it as 
 much as the designers might have thought 
 ([http://stackoverflow.com/questions/20849289/how-do-i-limit-log-verbosity-of-hive],
  
 [http://stackoverflow.com/questions/14121543/controlling-the-level-of-verbosity-in-hive]).
 It would be nice to be able to control the level of verbosity (but *not* by 
 the {{-v}} switch!):
 # Make sure that the progress report is only issued where there is something 
 new to report; or
 # Remove all the progress messages; or
 # Make sure that progress is reported only every X sec (instead of every 1 
 second)
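Options 1 and 3 above can be combined in a small throttle that only emits a line when the report text has changed or a minimum interval has elapsed. This is an illustrative sketch, assuming a hypothetical reporter calls `shouldLog` once per tick; it is not Hive's actual logging code:

```java
public class ProgressLogThrottle {
    private String lastReport = null;
    private long lastEmitMs = 0;
    private final long minIntervalMs;

    public ProgressLogThrottle(long minIntervalMs) {
        this.minIntervalMs = minIntervalMs;
    }

    // Returns true when a progress line should actually be printed: either
    // the report changed, or minIntervalMs has passed since the last
    // emitted line (so long-running stages still show liveness).
    public boolean shouldLog(String report, long nowMs) {
        boolean changed = !report.equals(lastReport);
        boolean intervalElapsed = nowMs - lastEmitMs >= minIntervalMs;
        if (changed || intervalElapsed) {
            lastReport = report;
            lastEmitMs = nowMs;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ProgressLogThrottle t = new ProgressLogThrottle(10_000);
        System.out.println(t.shouldLog("map = 100%, reduce = 0%", 0));      // new report
        System.out.println(t.shouldLog("map = 100%, reduce = 0%", 1_000));  // suppressed
        System.out.println(t.shouldLog("map = 100%, reduce = 100%", 2_000)); // changed
    }
}
```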



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6200) Hive custom SerDe cannot load DLL added by ADD FILE command

2014-03-27 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6200:
---

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Shuaishuai!

 Hive custom SerDe cannot load DLL added by ADD FILE command
 -

 Key: HIVE-6200
 URL: https://issues.apache.org/jira/browse/HIVE-6200
 Project: Hive
  Issue Type: Bug
Reporter: Shuaishuai Nie
Assignee: Shuaishuai Nie
 Fix For: 0.14.0

 Attachments: HIVE-6200.1.patch, HIVE-6200.2.patch, HIVE-6200.3.patch


 When custom SerDe need to load a DLL file added using ADD FILE command in 
 HIVE, the loading fail with exception like 
 java.lang.UnsatisfiedLinkError:C:\tmp\admin2_6996@headnode0_201401100431_resources\hello.dll:
  Access is denied. 
 The reason is when FileSystem creating local copy of the file, the permission 
 of local file is set to default as 666. DLL file need execute permission 
 to be loaded successfully.
 Similar scenario also happens when Hadoop localize files in distributed 
 cache. The solution in Hadoop is to add execute permission to the file 
 after localizationl.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6759) Fix reading partial ORC files while they are being written

2014-03-27 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6759:


Attachment: HIVE-6759.patch

This patch fixes the problem by using the supplied length rather than the stat 
from the NameNode.

 Fix reading partial ORC files while they are being written
 --

 Key: HIVE-6759
 URL: https://issues.apache.org/jira/browse/HIVE-6759
 Project: Hive
  Issue Type: Sub-task
Reporter: Owen O'Malley
 Attachments: HIVE-6759.patch


 HDFS's hflush ensures the bytes are visible, but it doesn't update the file 
 length on the NameNode. Currently the ORC reader will only read up to 
 the length reported by the NameNode. If the user specified a length from a 
 flush_length file, the ORC reader should trust it to be right.
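The idea of the fix can be sketched in one decision: prefer the caller-supplied length over the NameNode stat. A minimal sketch, assuming a sentinel of -1 for "no length supplied" (the sentinel and names are illustrative, not the patch's actual API):

```java
public class OrcTailLength {
    // When the caller supplies a length (e.g. one read from a side file
    // written at hflush time), trust it; otherwise fall back to the file
    // length reported by the NameNode, which may lag behind hflush'ed data.
    public static long effectiveLength(long suppliedLength, long nameNodeLength) {
        return suppliedLength >= 0 ? suppliedLength : nameNodeLength;
    }

    public static void main(String[] args) {
        System.out.println(effectiveLength(1_000_000L, 0L)); // trust supplied
        System.out.println(effectiveLength(-1L, 4096L));     // fall back to stat
    }
}
```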



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6759) Fix reading partial ORC files while they are being written

2014-03-27 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6759:


Assignee: Owen O'Malley
  Status: Patch Available  (was: Open)

 Fix reading partial ORC files while they are being written
 --

 Key: HIVE-6759
 URL: https://issues.apache.org/jira/browse/HIVE-6759
 Project: Hive
  Issue Type: Sub-task
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Attachments: HIVE-6759.patch


 HDFS with the hflush ensures the bytes are visible, but doesn't update the 
 file length on the NameNode. Currently the Orc reader will only read up to 
 the length on the NameNode. If the user specified a length from a 
 flush_length file, the Orc reader should trust it to be right.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6546:
--

Affects Version/s: 0.14.0

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.14.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in  org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // everything good
     } else {
       if (i < args.length - 1) {
         // -D__WEBHCAT_TOKEN_FILE_LOCATION__ does not contain '=', so this
         // branch runs and appends "=-useHCatalog"
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.
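The parsing bug can be reproduced in isolation. The sketch below is a simplified model of `preProcessForWindows()`, not the real method; it shows why a placeholder without an '=' swallows the following argument, and why a placeholder containing '=' would not:

```java
import java.util.ArrayList;
import java.util.List;

public class PreProcessForWindowsSketch {
    // Simplified model of GenericOptionsParser.preProcessForWindows():
    // a -D argument that does not already contain '=' greedily consumes
    // the NEXT argument as its value.
    public static List<String> process(String[] args) {
        List<String> newArgs = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if (!args[i].startsWith("-D")) {
                newArgs.add(args[i]);
                continue;
            }
            String prop = args[i];
            if (!prop.contains("=") && i < args.length - 1) {
                prop += "=" + args[++i]; // swallows "-useHCatalog" here
            }
            newArgs.add(prop);
        }
        return newArgs;
    }

    public static void main(String[] args) {
        // Placeholder without '=': the next argument is consumed.
        System.out.println(process(new String[] {
            "-D__WEBHCAT_TOKEN_FILE_LOCATION__", "-useHCatalog"}));
        // Placeholder containing '=': both arguments survive intact.
        System.out.println(process(new String[] {
            "-D__WEBHCAT_TOKEN_FILE_LOCATION__=x", "-useHCatalog"}));
    }
}
```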



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949543#comment-13949543
 ] 

Owen O'Malley commented on HIVE-6757:
-

From Hive's point of view, they are unused classes that have never been 
released. The right time to remove them is now before 0.13 is released.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.13.0


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6757:


Attachment: HIVE-6757.patch

Just delete the entire directory. No other code change is necessary.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6757:


Status: Patch Available  (was: Open)

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Owen O'Malley updated HIVE-6757:


Priority: Blocker  (was: Major)

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6670) ClassNotFound with Serde

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949554#comment-13949554
 ] 

Alan Gates commented on HIVE-6670:
--

Ran tests locally, all looks good.

 ClassNotFound with Serde
 

 Key: HIVE-6670
 URL: https://issues.apache.org/jira/browse/HIVE-6670
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Abin Shahab
Assignee: Abin Shahab
 Attachments: HIVE-6670-branch-0.12.patch, HIVE-6670.1.patch, 
 HIVE-6670.patch


 We are finding a ClassNotFound exception when we use 
 CSVSerde(https://github.com/ogrodnek/csv-serde) to create a table.
 This is happening because MapredLocalTask does not pass the local added jars 
 to ExecDriver when that is launched.
 ExecDriver's classpath does not include the added jars. Therefore, when the 
 plan is deserialized, it throws a ClassNotFoundException in the 
 deserialization code, and results in a TableDesc object with a Null 
 DeserializerClass.
 This results in an NPE during Fetch.
 Steps to reproduce:
 wget 
 https://drone.io/github.com/ogrodnek/csv-serde/files/target/csv-serde-1.1.2-0.11.0-all.jar
 into somewhere local, e.g. 
 /home/soam/HiveSerdeIssue/csv-serde-1.1.2-0.11.0-all.jar.
 Place some sample CSV files in HDFS as follows:
 hdfs dfs -mkdir /user/soam/HiveSerdeIssue/sampleCSV/
 hdfs dfs -put /home/soam/sampleCSV.csv /user/soam/HiveSerdeIssue/sampleCSV/
 hdfs dfs -mkdir /user/soam/HiveSerdeIssue/sampleJoinTarget/
 hdfs dfs -put /home/soam/sampleJoinTarget.csv 
 /user/soam/HiveSerdeIssue/sampleJoinTarget/
 
 create the tables in hive:
 ADD JAR /home/soam/HiveSerdeIssue/csv-serde-1.1.2-0.11.0-all.jar;
 create external table sampleCSV (md5hash string, filepath string)
 row format serde 'com.bizo.hive.serde.csv.CSVSerde'
 stored as textfile
 location '/user/soam/HiveSerdeIssue/sampleCSV/'
 ;
 create external table sampleJoinTarget (md5hash string, filepath string, 
 datestamp string, nblines string, nberrors string)
 ROW FORMAT DELIMITED 
 FIELDS TERMINATED BY ',' 
 LINES TERMINATED BY '\n'
 STORED AS TEXTFILE
 LOCATION '/user/soam/HiveSerdeIssue/sampleJoinTarget/'
 ;
 ===
 Now, try the following JOIN:
 ADD JAR /home/soam/HiveSerdeIssue/csv-serde-1.1.2-0.11.0-all.jar;
 SELECT 
 sampleCSV.md5hash, 
 sampleCSV.filepath 
 FROM sampleCSV
 JOIN sampleJoinTarget
 ON (sampleCSV.md5hash = sampleJoinTarget.md5hash) 
 ;
 —
 This will fail with the error:
 Execution log at: /tmp/soam/.log
 java.lang.ClassNotFoundException: com/bizo/hive/serde/csv/CSVSerde
 Continuing ...
 2014-03-11 10:35:03 Starting to launch local task to process map join; 
 maximum memory = 238551040
 Execution failed with exit status: 2
 Obtaining error information
 Task failed!
 Task ID:
 Stage-4
 Logs:
 /var/log/hive/soam/hive.log
 FAILED: Execution Error, return code 2 from 
 org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
 Try the following LEFT JOIN. This will work:
 SELECT 
 sampleCSV.md5hash, 
 sampleCSV.filepath 
 FROM sampleCSV
 LEFT JOIN sampleJoinTarget
 ON (sampleCSV.md5hash = sampleJoinTarget.md5hash) 
 ;
 ==



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949557#comment-13949557
 ] 

Brock Noland commented on HIVE-6757:


bq. From Hive's point of view, they are unused classes that have never been 
released. 

I disagree. These class names are stored by many hive users in the metastore.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6546:
--

Fix Version/s: (was: 0.13.0)
   0.14.0

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.14.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in  org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // everything good
     } else {
       if (i < args.length - 1) {
         // -D__WEBHCAT_TOKEN_FILE_LOCATION__ does not contain '=', so this
         // branch runs and appends "=-useHCatalog"
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6703) Tez should store SHA of the jar when uploading to cache

2014-03-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949594#comment-13949594
 ] 

Sergey Shelukhin commented on HIVE-6703:


will commit to trunk and 13 later today

 Tez should store SHA of the jar when uploading to cache
 ---

 Key: HIVE-6703
 URL: https://issues.apache.org/jira/browse/HIVE-6703
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-6703.01.patch, HIVE-6703.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6701) Analyze table compute statistics for decimal columns.

2014-03-27 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949590#comment-13949590
 ] 

Sergey Shelukhin commented on HIVE-6701:


will commit to trunk and 13 later today

 Analyze table compute statistics for decimal columns.
 -

 Key: HIVE-6701
 URL: https://issues.apache.org/jira/browse/HIVE-6701
 Project: Hive
  Issue Type: Bug
Reporter: Jitendra Nath Pandey
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-6701.02.patch, HIVE-6701.1.patch


 Analyze table should compute statistics for decimal columns as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6734) DDL locking too course grained in new db txn manager

2014-03-27 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949636#comment-13949636
 ] 

Ashutosh Chauhan commented on HIVE-6734:


[~alangates] Can you create an RB entry for it?

 DDL locking too course grained in new db txn manager
 

 Key: HIVE-6734
 URL: https://issues.apache.org/jira/browse/HIVE-6734
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6734.patch


 All DDL operations currently acquire an exclusive lock.  This is too coarse 
 grained, as some operations like alter table add partition shouldn't get an 
 exclusive lock on the entire table.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6710) Deadlocks seen in transaction handler using mysql

2014-03-27 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949628#comment-13949628
 ] 

Ashutosh Chauhan commented on HIVE-6710:


[~alangates] Can you create an RB entry for this?

 Deadlocks seen in transaction handler using mysql
 -

 Key: HIVE-6710
 URL: https://issues.apache.org/jira/browse/HIVE-6710
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6710.patch


 When multiple clients attempt to obtain locks a deadlock on the mysql 
 database occasionally occurs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949633#comment-13949633
 ] 

Owen O'Malley commented on HIVE-6757:
-

{quote}
I disagree. These class names are stored by many hive users in the metastore.
{quote}

Actually, these have never been released so there are *NO* Hive users yet. That 
is exactly why this needs to be fixed before 0.13.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6710) Deadlocks seen in transaction handler using mysql

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949647#comment-13949647
 ] 

Alan Gates commented on HIVE-6710:
--

Review board entry created, https://reviews.apache.org/r/19735/

 Deadlocks seen in transaction handler using mysql
 -

 Key: HIVE-6710
 URL: https://issues.apache.org/jira/browse/HIVE-6710
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6710.patch


 When multiple clients attempt to obtain locks a deadlock on the mysql 
 database occasionally occurs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949648#comment-13949648
 ] 

Brock Noland commented on HIVE-6757:


bq. Actually, these have never been released so there are NO Hive users yet.

This is not true. Many Hive users used the Parquet Serde before it was 
contributed to the Hive project. Those Hive users are extremely interested in 
having their existing tables work when they go to 0.13.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6710) Deadlocks seen in transaction handler using mysql

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949649#comment-13949649
 ] 

Alan Gates commented on HIVE-6710:
--

It should be noted that this looks like I rewrote large sections of the code, 
but I did not.  Each public method was wrapped in a try/catch block to handle 
deadlocks and retry.
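The wrap-and-retry pattern described above can be sketched as follows. This is an illustrative sketch, not the patch itself; it assumes MySQL reports deadlocks with SQLState 40001 (vendor error 1213) and invents the `TxnOp`/`withRetry` names:

```java
import java.sql.SQLException;

public class DeadlockRetry {
    // Functional wrapper for one transaction-handler operation.
    public interface TxnOp<T> {
        T run() throws SQLException;
    }

    // Each public method's body is wrapped so that a MySQL deadlock
    // (SQLState 40001) is retried instead of surfacing to the caller.
    public static <T> T withRetry(TxnOp<T> op, int maxRetries) throws SQLException {
        SQLException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.run();
            } catch (SQLException e) {
                if (!"40001".equals(e.getSQLState())) {
                    throw e; // not a deadlock: fail fast
                }
                last = e; // deadlock victim: loop and retry
            }
        }
        throw last; // retries exhausted
    }

    public static void main(String[] args) throws SQLException {
        int[] tries = {0};
        String out = withRetry(() -> {
            if (++tries[0] < 2) throw new SQLException("deadlock", "40001", 1213);
            return "ok";
        }, 3);
        System.out.println(out + " after " + tries[0] + " tries");
    }
}
```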

 Deadlocks seen in transaction handler using mysql
 -

 Key: HIVE-6710
 URL: https://issues.apache.org/jira/browse/HIVE-6710
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6710.patch


 When multiple clients attempt to obtain locks a deadlock on the mysql 
 database occasionally occurs.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6734) DDL locking too course grained in new db txn manager

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949658#comment-13949658
 ] 

Alan Gates commented on HIVE-6734:
--

I changed the grain of locking for DDL statements.  In the initial checkin I 
had set all DDL statements to use an exclusive lock.  But this results in a 
whole table being locked just so that a partition can be added, which is 
excessive.  So I went back and added DDL_EXCLUSIVE, DDL_SHARED, and 
DDL_NO_LOCK so that different DDL statements can acquire locks of the 
appropriate grain.  For example, add partition is now a shared lock, drop 
partition is an exclusive lock, and add function takes no lock.
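The mapping described in the comment above can be sketched as a small dispatch. The statement strings, default case, and class name are illustrative only; Hive's real code dispatches on its internal operation types, not SQL text:

```java
public class DdlLockGrain {
    public enum LockType { DDL_EXCLUSIVE, DDL_SHARED, DDL_NO_LOCK }

    // Map each DDL operation to the weakest lock that still protects it:
    // add partition -> shared, drop partition -> exclusive,
    // add function -> no lock; anything unrecognized stays exclusive.
    public static LockType lockFor(String ddlOp) {
        switch (ddlOp) {
            case "ALTER TABLE ADD PARTITION":  return LockType.DDL_SHARED;
            case "ALTER TABLE DROP PARTITION": return LockType.DDL_EXCLUSIVE;
            case "CREATE FUNCTION":            return LockType.DDL_NO_LOCK;
            default:                           return LockType.DDL_EXCLUSIVE; // conservative
        }
    }

    public static void main(String[] args) {
        System.out.println(lockFor("ALTER TABLE ADD PARTITION"));
    }
}
```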

 DDL locking too course grained in new db txn manager
 

 Key: HIVE-6734
 URL: https://issues.apache.org/jira/browse/HIVE-6734
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6734.patch


 All DDL operations currently acquire an exclusive lock.  This is too coarse 
 grained, as some operations like alter table add partition shouldn't get an 
 exclusive lock on the entire table.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6734) DDL locking too course grained in new db txn manager

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949659#comment-13949659
 ] 

Alan Gates commented on HIVE-6734:
--

Created review board https://reviews.apache.org/r/19736/

 DDL locking too course grained in new db txn manager
 

 Key: HIVE-6734
 URL: https://issues.apache.org/jira/browse/HIVE-6734
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6734.patch


 All DDL operations currently acquire an exclusive lock.  This is too course 
 grained, as some operations like alter table add partition shouldn't get an 
 exclusive lock on the entire table.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949670#comment-13949670
 ] 

Hive QA commented on HIVE-6546:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637024/HIVE-6546.03.patch

{color:green}SUCCESS:{color} +1 5491 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1985/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1985/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637024

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.14.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in  org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // everything good
     } else {
       if (i < args.length - 1) {
         // -D__WEBHCAT_TOKEN_FILE_LOCATION__ does not contain '=', so this
         // branch runs and appends "=-useHCatalog"
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949686#comment-13949686
 ] 

Owen O'Malley commented on HIVE-6757:
-

{quote}
 Many Hive users used the Parquet Serde before it was contributed to the Hive 
project. 
{quote}
They can continue to use their jar and it will continue to work. That isn't 
motivation for putting these jars into Hive.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19599: HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread dilli dorai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19599/
---

(Updated March 27, 2014, 5:52 p.m.)


Review request for hive, Ashutosh Chauhan, Thejas Nair, and Vaibhav Gumashta.


Changes
---

Patch rebased onto the current repo.


Bugs: HIVE-6697
https://issues.apache.org/jira/browse/HIVE-6697


Repository: hive-git


Description
---

See JIRA for the description:
https://issues.apache.org/jira/browse/HIVE-6697


Diffs (updated)
-

  common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 551639f 
  conf/hive-default.xml.template 3c3df43 
  service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java 86d2009 
  service/src/java/org/apache/hive/service/cli/CLIService.java e31a74e 
  service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpCLIService.java 
f4cbe91 
  service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 
255a165 
  shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java 
80247ec 
  
shims/common-secure/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java
 d4cddda 
  shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 
90c5602 

Diff: https://reviews.apache.org/r/19599/diff/


Testing
---

## Verification of enhancement with Beeline/JDBC 

### Verified that the following calls succeeded in getting a connection and 
listing tables 
when a valid spnego.principal and spnego.keytab are specified in hive-site.xml,
and the client has kinited and has a valid Kerberos ticket in its cache


!connect 
jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
  dummy dummy-pass org.apache.hive.jdbc.HiveDriver 


!connect 
jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
  dummy dummy-pass org.apache.hive.jdbc.HiveDriver 

### Verified that the following call succeeded in getting a connection and 
listing tables 
even if a valid spnego.principal or spnego.keytab is not specified in 
hive-site.xml,
as long as a valid HiveServer2 Kerberos principal and keytab are specified in 
hive-site.xml,
and the client has kinited and has a valid Kerberos ticket in its cache

!connect 
jdbc:hive2://hdps.example.com:10001/default;principal=hive/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
  dummy dummy-pass org.apache.hive.jdbc.HiveDriver 

### Verified that the following call failed to get a connection 
when a valid spnego.principal or spnego.keytab is not specified in 
hive-site.xml

!connect 
jdbc:hive2://hdps.example.com:10001/default;principal=HTTP/hdps.example@example.com?hive.server2.transport.mode=http;hive.server2.thrift.http.path=cliservice
  dummy dummy-pass org.apache.hive.jdbc.HiveDriver 

## Verification of enhancement with Apache Knox

Apache Knox was able to authenticate to HiveServer2 as a SPNego client using 
Apache HttpClient,
and list tables, when a correct spnego.principal and spnego.keytab are specified 
in hive-site.xml

Apache Knox was not able to authenticate to HiveServer2 as a SPNego client 
using Apache HttpClient
when a valid spnego.principal or spnego.keytab is not specified in hive-site.xml

## Verification of enhancement with curl

### When a valid spnego.principal and spnego.keytab are specified in hive-site.xml
and the client has kinited and has a valid Kerberos ticket in cache

curl -i --negotiate -u : http://hdps.example.com:10001/cliservice

SPNego authentication succeeded, and we got an HTTP status code 500
since we did not send Thrift body content

### When a valid spnego.principal and spnego.keytab are specified in hive-site.xml
and the client has not kinited and does not have a valid Kerberos ticket in 
cache

curl -i --negotiate -u : http://hdps.example.com:10001/cliservice

curl -i --negotiate -u : http://hdps.example.com:10001/cliservice
HTTP/1.1 401 Unauthorized
WWW-Authenticate: Negotiate
Content-Type: application/x-thrift;charset=ISO-8859-1
Content-Length: 69
Server: Jetty(7.6.0.v20120127)

Authentication Error: java.lang.reflect.UndeclaredThrowableException


Thanks,

dilli dorai



[jira] [Commented] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949702#comment-13949702
 ] 

Thejas M Nair commented on HIVE-6546:
-

[~rhbutani] This is a small change, but it will be very useful to have it in 
0.13, as it affects functionality on Windows.


 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.14.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in  org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // everything good
     } else {
       // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" does not contain an equal sign,
       // so this branch runs and appends "=-useHCatalog"
       if (i < args.length - 1) {
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.
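The failure mode can be reproduced outside Hadoop. Below is a minimal, self-contained sketch of the joining logic in preProcessForWindows() (the class and method names here are ours, not Hive's or Hadoop's): because the placeholder token contains no = sign, the else branch glues the *following* argument onto it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch mimicking the -D rejoining logic in
// GenericOptionsParser.preProcessForWindows(): a "-Dkey" token whose value
// was split into the next token is rejoined with '='. A placeholder token
// with no '=' therefore swallows the argument that follows it.
public class WindowsArgJoinDemo {
    public static List<String> join(String[] args) {
        List<String> newArgs = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            String prop = args[i].startsWith("-D") ? args[i] : null;
            if (prop != null) {
                if (prop.contains("=")) {
                    // already -Dkey=value: everything good
                } else if (i < args.length - 1) {
                    prop += "=" + args[++i];  // glues on the next token
                }
                newArgs.add(prop);
            } else {
                newArgs.add(args[i]);
            }
        }
        return newArgs;
    }

    public static void main(String[] argv) {
        // Reproduces the bad string from the bug report:
        // -D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog
        System.out.println(join(new String[]{
            "-D__WEBHCAT_TOKEN_FILE_LOCATION__", "-useHCatalog"}));
    }
}
```

This also shows why giving TOKEN_FILE_ARG_PLACEHOLDER an = sign would sidestep the problem: a token that already contains = takes the harmless first branch.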



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949709#comment-13949709
 ] 

Brock Noland commented on HIVE-6757:


bq. They can continue to use their jar and it will continue to work. That isn't 
motivation for putting these jars into Hive.

They cannot continue to use their jars because many of the Hive interfaces 
changed in 0.12 and 0.13. This was one of the reasons that the Parquet 
developers agreed to contribute their work to Hive. I am quite surprised you 
marked this as a blocker considering:

* There is no Apache or Hive policy against this code
* This work was done a long time ago
* You are watching the JIRA in which this work was completed
* It's a tiny amount of code (all wrappers), impacting no one

I do not agree with removing this code for the 0.13 release.
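The wrappers under discussion are thin delegation shims of roughly this shape (an illustrative sketch; these class names are hypothetical, not Hive's or Parquet's actual files):

```java
// Hypothetical sketch of a backward-compatibility wrapper: a class kept
// under the old, non-org.apache name that only extends the relocated
// implementation, so old class names keep resolving.
class RelocatedSerDe {                  // stands in for the org.apache.* class
    String describe() { return "relocated org.apache implementation"; }
}

@Deprecated
class LegacySerDe extends RelocatedSerDe {
    // intentionally empty: exists only to preserve the old class name
}

public class WrapperDemo {
    // Old call sites that still name LegacySerDe transparently get the
    // relocated behavior through inheritance.
    public static String callThroughLegacyName() {
        return new LegacySerDe().describe();
    }

    public static void main(String[] args) {
        System.out.println(callThroughLegacyName());
    }
}
```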

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6694) Beeline should provide a way to execute shell command as Hive CLI does

2014-03-27 Thread Szehon Ho (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949752#comment-13949752
 ] 

Szehon Ho commented on HIVE-6694:
-

Nice, it looks good.  One minor suggestion is to remove the commented code.  

That brings up a question: Hive CLI does variable substitution, but here it is 
commented out (not sure if intentionally?). Why the difference? Is it to prevent 
collisions with shell variables?

 Beeline should provide a way to execute shell command as Hive CLI does
 --

 Key: HIVE-6694
 URL: https://issues.apache.org/jira/browse/HIVE-6694
 Project: Hive
  Issue Type: Improvement
  Components: CLI, Clients
Affects Versions: 0.11.0, 0.12.0, 0.13.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Attachments: HIVE-6694.patch


 Hive CLI allows a user to execute a shell command using the ! notation. For 
 instance, !cat myfile.txt. Being able to execute shell commands may be 
 important for some users. As a replacement, however, Beeline provides no such 
 capability, possibly because the ! notation is reserved for SQLLine commands. 
 It's possible to provide this using a slight syntactic variation such as 
 !sh cat myfile.txt.
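A minimal sketch of the proposed dispatch (illustrative names, not Beeline's actual code): recognize the !sh prefix, strip it, and hand the remainder to the platform shell.

```java
// Hypothetical sketch of "!sh <cmd>" handling: extractShellCommand() is our
// name, not a Beeline method. Only the "!sh " prefix is treated as a shell
// escape, leaving plain "!" free for SQLLine commands.
public class ShellCommandDemo {
    // Returns the shell command when the line uses the "!sh " prefix,
    // or null when the line is something else (SQL, another !-command, ...).
    public static String extractShellCommand(String line) {
        String trimmed = line.trim();
        if (trimmed.startsWith("!sh ")) {
            return trimmed.substring("!sh ".length()).trim();
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        String cmd = extractShellCommand("!sh echo hello");
        if (cmd != null) {
            // Hand off to the platform shell, much as Hive CLI's "!" does.
            Process p = new ProcessBuilder("sh", "-c", cmd).inheritIO().start();
            p.waitFor();
        }
    }
}
```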



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6763) HiveServer2 in http mode might send same kerberos client ticket in case of concurrent requests resulting in server throwing a replay exception

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949755#comment-13949755
 ] 

Thejas M Nair commented on HIVE-6763:
-

[~vaibhavgumashta] Can you create a reviewboard link? The indentation seems to 
have some issues.
It is recommended practice to put the unlock in a finally block. Alternatively, 
should we just use a simpler synchronized block instead of ReentrantLock? 
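The two patterns being suggested look like this (an illustrative sketch, not the HIVE-6763 patch itself): unlocking a ReentrantLock in a finally block releases the lock even when the guarded code throws, while a synchronized block gives the same guarantee with less ceremony.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative comparison of explicit locking with try/finally versus
// intrinsic locking via synchronized.
public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int counter = 0;

    public int incrementWithLock() {
        lock.lock();
        try {
            return ++counter;      // guarded section; may throw in general
        } finally {
            lock.unlock();         // always runs, even on exception
        }
    }

    // Intrinsic locking: the JVM releases the monitor on every exit path,
    // so there is no unlock call to forget.
    public synchronized int incrementSynchronized() {
        return ++counter;
    }

    public static void main(String[] args) {
        LockDemo d = new LockDemo();
        System.out.println(d.incrementWithLock());      // 1
        System.out.println(d.incrementSynchronized());  // 2
    }
}
```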

 HiveServer2 in http mode might send same kerberos client ticket in case of 
 concurrent requests resulting in server throwing a replay exception
 --

 Key: HIVE-6763
 URL: https://issues.apache.org/jira/browse/HIVE-6763
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6763.1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19718: Vectorized Between and IN expressions don't work with decimal, date types.

2014-03-27 Thread Eric Hanson

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19718/#review38752
---


Looks good overall. Only minor comments.


ql/src/gen/vectorization/ExpressionTemplates/FilterDecimalColumnBetween.txt
https://reviews.apache.org/r/19718/#comment71027

please remove all trailing whitespace in this file



ql/src/gen/vectorization/ExpressionTemplates/FilterDecimalColumnBetween.txt
https://reviews.apache.org/r/19718/#comment71034

add blank after //



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java
https://reviews.apache.org/r/19718/#comment71038

    "Couldn't determine common type ..."
    
    sounds better



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalColumnInList.java
https://reviews.apache.org/r/19718/#comment71053

Change comment. This is not a filter, it is a Boolean-valued expression.



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalColumnInList.java
https://reviews.apache.org/r/19718/#comment71052

    Remove the comment about "This is optimized for lookup of the data type of 
the column", 

because that doesn't apply here since you're using the standard HashSet.

But it is still pretty good :-)



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalColumnInList.java
https://reviews.apache.org/r/19718/#comment71057

    formatting: j=0 should be j = 0




ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalColumnInList.java
https://reviews.apache.org/r/19718/#comment71059

add blanks line before comment and space after //



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterDecimalColumnInList.java
https://reviews.apache.org/r/19718/#comment71062

    remove "This is optimized"



ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterDecimalColumnInList.java
https://reviews.apache.org/r/19718/#comment71061

see formatting comments for DecimalColumnInList


- Eric Hanson


On March 27, 2014, 7:02 a.m., Jitendra Pandey wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19718/
 ---
 
 (Updated March 27, 2014, 7:02 a.m.)
 
 
 Review request for hive and Eric Hanson.
 
 
 Bugs: HIVE-6752
 https://issues.apache.org/jira/browse/HIVE-6752
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Vectorized Between and IN expressions don't work with decimal, date types.
 
 
 Diffs
 -
 
   ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java 44b0c59 
   ql/src/gen/vectorization/ExpressionTemplates/FilterDecimalColumnBetween.txt 
 PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/exec/vector/VectorizationContext.java 
 96e74a9 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/CastDateToString.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/DecimalColumnInList.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/FilterDecimalColumnInList.java
  PRE-CREATION 
   
 ql/src/java/org/apache/hadoop/hive/ql/exec/vector/expressions/IDecimalInExpr.java
  PRE-CREATION 
   ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java 
 c2240c0 
   ql/src/test/queries/clientpositive/vector_between_in.q PRE-CREATION 
   ql/src/test/results/clientpositive/vector_between_in.q.out PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/19718/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Jitendra Pandey
 




[jira] [Commented] (HIVE-6752) Vectorized Between and IN expressions don't work with decimal, date types.

2014-03-27 Thread Eric Hanson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949765#comment-13949765
 ] 

Eric Hanson commented on HIVE-6752:
---

+1

Conditional on addressing my comments in the code review. All of them are minor.

 Vectorized Between and IN expressions don't work with decimal, date types.
 --

 Key: HIVE-6752
 URL: https://issues.apache.org/jira/browse/HIVE-6752
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: HIVE-6752.1.patch


 Vectorized Between and IN expressions don't work with decimal, date types.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Attachment: HIVE-6547.nothriftgen.1.patch

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.1.patch, HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about 
 Role to role member mapping removed from the Role object, as it is not part 
 of a logical Role. This information is not relevant for actions such as creating 
 a Role.
 As part of this change  get_role_grants_for_principal api will be added, so 
 that it can be used in place of  list_roles, when role mapping information is 
 desired.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Attachment: HIVE-6547.1.patch

HIVE-6547.1.patch - includes thrift gen files


 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.1.patch, HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about 
 Role to role member mapping removed from the Role object, as it is not part 
 of a logical Role. This information is not relevant for actions such as creating 
 a Role.
 As part of this change  get_role_grants_for_principal api will be added, so 
 that it can be used in place of  list_roles, when role mapping information is 
 desired.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Status: Patch Available  (was: Open)

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.1.patch, HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about 
 Role to role member mapping removed from the Role object, as it is not part 
 of a logical Role. This information is not relevant for actions such as creating 
 a Role.
 As part of this change  get_role_grants_for_principal api will be added, so 
 that it can be used in place of  list_roles, when role mapping information is 
 desired.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6319) Insert, update, delete functionality needs a compactor

2014-03-27 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6319:
-

Attachment: HIVE-6319.patch

 Insert, update, delete functionality needs a compactor
 --

 Key: HIVE-6319
 URL: https://issues.apache.org/jira/browse/HIVE-6319
 Project: Hive
  Issue Type: Sub-task
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: 6319.wip.patch, HIVE-6319.patch, HiveCompactorDesign.pdf


 In order to keep the number of delta files from spiraling out of control we 
 need a compactor to collect these delta files together, and eventually 
 rewrite the base file when the deltas get large enough.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HIVE-3272) RetryingRawStore will perform partial transaction on retry

2014-03-27 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho resolved HIVE-3272.
-

Resolution: Duplicate

It's fixed in HIVE-4996, as RetryingRawStore was removed.

 RetryingRawStore will perform partial transaction on retry
 --

 Key: HIVE-3272
 URL: https://issues.apache.org/jira/browse/HIVE-3272
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Priority: Critical

 By the time the RetryingRawStore retries a command, the transaction 
 encompassing it has already been rolled back.  This means that it will 
 perform the remainder of the raw store commands outside of a transaction 
 (unless there is another one encapsulating it, which is definitely not always 
 the case), and then fail when it tries to commit the transaction, as there is 
 none open.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949787#comment-13949787
 ] 

Harish Butani commented on HIVE-6546:
-

+1 for 0.13

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.14.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in  org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // everything good
     } else {
       // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" does not contain an equal sign,
       // so this branch runs and appends "=-useHCatalog"
       if (i < args.length - 1) {
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Eric Hanson (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Hanson updated HIVE-6546:
--

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.14.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in  org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // everything good
     } else {
       // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" does not contain an equal sign,
       // so this branch runs and appends "=-useHCatalog"
       if (i < args.length - 1) {
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6735) Make scalable dynamic partitioning work in vectorized mode

2014-03-27 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949836#comment-13949836
 ] 

Hive QA commented on HIVE-6735:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637049/HIVE-6735.3.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5492 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_root_dir_external_table
org.apache.hadoop.hive.metastore.TestRetryingHMSHandler.testRetryingHMSHandler
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1986/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/1986/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637049

 Make scalable dynamic partitioning work in vectorized mode
 --

 Key: HIVE-6735
 URL: https://issues.apache.org/jira/browse/HIVE-6735
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.0, 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
 Fix For: 0.13.0, 0.14.0

 Attachments: HIVE-6735.1.patch, HIVE-6735.2.patch, HIVE-6735.2.patch, 
 HIVE-6735.3.patch


 HIVE-6455 added support for scalable dynamic partitioning. This is subtask to 
 make HIVE-6455 work with vectorized operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Attachment: (was: HIVE-6547.nothriftgen.1.patch)

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.1.patch, HIVE-6547.thriftapi.2.patch, 
 HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about 
 Role to role member mapping removed from the Role object, as it is not part 
 of a logical Role. This information is not relevant for actions such as creating 
 a Role.
 As part of this change  get_role_grants_for_principal api will be added, so 
 that it can be used in place of  list_roles, when role mapping information is 
 desired.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19747: HIVE-6547 - normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19747/
---

Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-6547
https://issues.apache.org/jira/browse/HIVE-6547


Repository: hive-git


Description
---

As discussed in HIVE-5931, it will be cleaner to have the information about 
Role to role member mapping removed from the Role object, as it is not part of 
a logical Role. This information is not relevant for actions such as creating a 
Role.
As part of this change  get_role_grants_for_principal api will be added, so 
that it can be used in place of  list_roles, when role mapping information is 
desired.

Also cleans up additional fields - principalname and principaltype - in 'show 
role grant user user2' output, as that is redundant information. Also removes 
role createtime from this command output, as that is not relevant to role grant 
information.


Diffs
-

  metastore/if/hive_metastore.thrift b3f01d6 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
d5c7ba7 
  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java 
0550589 
  metastore/src/java/org/apache/hadoop/hive/metastore/IMetaStoreClient.java 
47c49aa 
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java e185f12 
  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java ace6cb5 
  ql/src/java/org/apache/hadoop/hive/ql/plan/RoleDDLDesc.java bc9d47e 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAccessController.java
 50bd592 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizer.java
 48064c4 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveAuthorizerImpl.java
 2577ae5 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveRole.java
 7f3d78a 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HiveRoleGrant.java
 03f129a 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/GrantPrivAuthUtils.java
 fdbf3c3 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLAuthorizationUtils.java
 03d12ca 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAccessController.java
 5b24578 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/sqlstd/SQLStdHiveAuthorizationValidator.java
 7bb5a88 
  ql/src/test/queries/clientpositive/authorization_role_grant2.q 00a67a2 
  ql/src/test/results/clientnegative/authorization_fail_7.q.out 00e457d 
  ql/src/test/results/clientnegative/authorization_role_grant.q.out de17ae9 
  ql/src/test/results/clientpositive/authorization_1.q.out 916125b 
  ql/src/test/results/clientpositive/authorization_1_sql_std.q.out 2302da0 
  ql/src/test/results/clientpositive/authorization_5.q.out f1c07d0 
  ql/src/test/results/clientpositive/authorization_role_grant1.q.out 48e0f59 
  ql/src/test/results/clientpositive/authorization_role_grant2.q.out d08b906 
  ql/src/test/results/clientpositive/authorization_view_sqlstd.q.out 0a986e6 

Diff: https://reviews.apache.org/r/19747/diff/


Testing
---


Thanks,

Thejas Nair



[jira] [Commented] (HIVE-6319) Insert, update, delete functionality needs a compactor

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949845#comment-13949845
 ] 

Alan Gates commented on HIVE-6319:
--

Ran tests locally, all looks good.

 Insert, update, delete functionality needs a compactor
 --

 Key: HIVE-6319
 URL: https://issues.apache.org/jira/browse/HIVE-6319
 Project: Hive
  Issue Type: Sub-task
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: 6319.wip.patch, HIVE-6319.patch, HiveCompactorDesign.pdf


 In order to keep the number of delta files from spiraling out of control we 
 need a compactor to collect these delta files together, and eventually 
 rewrite the base file when the deltas get large enough.
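The merge the description calls for can be sketched with a toy model (illustrative names only — Hive's real ACID files are ORC with bucket/transaction metadata, not simple row lists): rows are keyed by a row id, and for each key the version stamped with the highest transaction id wins, after which the delta files can be deleted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch of delta compaction, not Hive's actual compactor.
public class CompactorSketch {
    record Row(long rowId, long txnId, String data) {}

    // Merge a base file with its delta files into a new base: for each rowId
    // the row with the highest txnId wins, so later deltas overwrite earlier
    // data.
    static List<Row> compact(List<Row> base, List<List<Row>> deltas) {
        Map<Long, Row> latest = new TreeMap<>();
        for (Row r : base) {
            latest.put(r.rowId(), r);
        }
        for (List<Row> delta : deltas) {
            for (Row r : delta) {
                // keep whichever version carries the higher transaction id
                latest.merge(r.rowId(), r, (a, b) -> a.txnId() >= b.txnId() ? a : b);
            }
        }
        return new ArrayList<>(latest.values());
    }
}
```

With a base holding rows 1 and 2 and deltas that update row 2 and insert row 3, compact() returns three rows, row 2 carrying the delta's data.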



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Attachment: HIVE-6547.nothriftgen.1.patch

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about
 Role to role member mapping removed from the Role object, as it is not part
 of a logical Role. This information is not relevant for actions such as
 creating a Role.
 As part of this change a get_role_grants_for_principal api will be added, so
 that it can be used in place of list_roles when role mapping information is
 desired.
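The shape of the normalized design can be sketched with hypothetical Java types (illustrative only, not the actual generated Thrift classes): the Role carries only its own identity, and role-to-member mapping lives in a separate grant record looked up on demand.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical types mirroring the normalization: Role no longer embeds its
// member list; grants are their own record.
public class RoleModelSketch {
    record Role(String roleName, String owner) {}
    record RoleGrant(String roleName, String principalName, boolean grantOption) {}

    // Analogue of the new get_role_grants_for_principal api: membership is
    // fetched separately instead of being carried inside every Role object.
    static List<RoleGrant> grantsForPrincipal(List<RoleGrant> allGrants, String principal) {
        List<RoleGrant> result = new ArrayList<>();
        for (RoleGrant g : allGrants) {
            if (g.principalName().equals(principal)) {
                result.add(g);
            }
        }
        return result;
    }
}
```

This keeps operations like creating a Role independent of membership data, which is the separation the description asks for.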





[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Attachment: (was: HIVE-6547.1.patch)

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about
 Role to role member mapping removed from the Role object, as it is not part
 of a logical Role. This information is not relevant for actions such as
 creating a Role.
 As part of this change a get_role_grants_for_principal api will be added, so
 that it can be used in place of list_roles when role mapping information is
 desired.





[jira] [Updated] (HIVE-6319) Insert, update, delete functionality needs a compactor

2014-03-27 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6319:
-

Status: Patch Available  (was: Open)

 Insert, update, delete functionality needs a compactor
 --

 Key: HIVE-6319
 URL: https://issues.apache.org/jira/browse/HIVE-6319
 Project: Hive
  Issue Type: Sub-task
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: 6319.wip.patch, HIVE-6319.patch, HiveCompactorDesign.pdf


 In order to keep the number of delta files from spiraling out of control we 
 need a compactor to collect these delta files together, and eventually 
 rewrite the base file when the deltas get large enough.





[jira] [Updated] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6547:


Attachment: HIVE-6547.1.patch

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.1.patch, HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about
 Role to role member mapping removed from the Role object, as it is not part
 of a logical Role. This information is not relevant for actions such as
 creating a Role.
 As part of this change a get_role_grants_for_principal api will be added, so
 that it can be used in place of list_roles when role mapping information is
 desired.





[jira] [Commented] (HIVE-6547) normalize struct Role in metastore thrift interface

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949847#comment-13949847
 ] 

Thejas M Nair commented on HIVE-6547:
-

The patch also removes the additional fields - principalname and principaltype - 
from the 'show role grant user user2' output, as that is redundant information. 
It also removes the role createtime from this command's output, as that is not 
relevant to role grant information.

 normalize struct Role in metastore thrift interface
 ---

 Key: HIVE-6547
 URL: https://issues.apache.org/jira/browse/HIVE-6547
 Project: Hive
  Issue Type: Bug
  Components: Metastore, Thrift API
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6547.1.patch, HIVE-6547.nothriftgen.1.patch, 
 HIVE-6547.thriftapi.2.patch, HIVE-6547.thriftapi.patch


 As discussed in HIVE-5931, it will be cleaner to have the information about
 Role to role member mapping removed from the Role object, as it is not part
 of a logical Role. This information is not relevant for actions such as
 creating a Role.
 As part of this change a get_role_grants_for_principal api will be added, so
 that it can be used in place of list_roles when role mapping information is
 desired.





[jira] [Commented] (HIVE-6735) Make scalable dynamic partitioning work in vectorized mode

2014-03-27 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949851#comment-13949851
 ] 

Prasanth J commented on HIVE-6735:
--

The test failures are unrelated. They pass locally on my system.

 Make scalable dynamic partitioning work in vectorized mode
 --

 Key: HIVE-6735
 URL: https://issues.apache.org/jira/browse/HIVE-6735
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor
Affects Versions: 0.13.0, 0.14.0
Reporter: Prasanth J
Assignee: Prasanth J
 Fix For: 0.13.0, 0.14.0

 Attachments: HIVE-6735.1.patch, HIVE-6735.2.patch, HIVE-6735.2.patch, 
 HIVE-6735.3.patch


 HIVE-6455 added support for scalable dynamic partitioning. This is subtask to 
 make HIVE-6455 work with vectorized operators.





[jira] [Commented] (HIVE-6748) FileSinkOperator needs to cleanup held references for container reuse

2014-03-27 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949854#comment-13949854
 ] 

Prasanth J commented on HIVE-6748:
--

Test failure is unrelated.

 FileSinkOperator needs to cleanup held references for container reuse
 -

 Key: HIVE-6748
 URL: https://issues.apache.org/jira/browse/HIVE-6748
 Project: Hive
  Issue Type: Bug
  Components: Tez
Affects Versions: 0.13.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HIVE-6748.1.patch


 The current implementation of FileSinkOperator runs into trouble when reusing 
 the same query pipeline aggressively with container reuse.
 This is due to a prevFSP writer which is left referenced after closeOp() and
 which is not reset even in initializeOp().
 {code}
 2014-03-25 14:46:31,744 FATAL [main] 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor: 
 org.apache.hadoop.hive.ql.metadata.HiveException: 
 java.nio.channels.ClosedChannelException
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:170)
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.getDynOutPaths(FileSinkOperator.java:758)
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator.startGroup(FileSinkOperator.java:833)
 at 
 org.apache.hadoop.hive.ql.exec.Operator.defaultStartGroup(Operator.java:497)
 at 
 org.apache.hadoop.hive.ql.exec.Operator.startGroup(Operator.java:520)
 at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.processKeyValues(ReduceRecordProcessor.java:296)
 at 
 org.apache.hadoop.hive.ql.exec.tez.ReduceRecordProcessor.run(ReduceRecordProcessor.java:223)
 at 
 org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:159)
 at 
 org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:306)
 at 
 org.apache.hadoop.mapred.YarnTezDagChild$4.run(YarnTezDagChild.java:549)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.hadoop.mapred.YarnTezDagChild.main(YarnTezDagChild.java:538)
 Caused by: java.nio.channels.ClosedChannelException
 at 
 org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1526)
 at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:98)
 at 
 org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
 at java.io.DataOutputStream.write(DataOutputStream.java:107)
 at 
 org.apache.hadoop.hive.ql.io.orc.WriterImpl$DirectStream.output(WriterImpl.java:316)
 at 
 org.apache.hadoop.hive.ql.io.orc.OutStream.flush(OutStream.java:242)
 at 
 org.apache.hadoop.hive.ql.io.orc.WriterImpl.writeMetadata(WriterImpl.java:1923)
 at 
 org.apache.hadoop.hive.ql.io.orc.WriterImpl.close(WriterImpl.java:2017)
 at 
 org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat$OrcRecordWriter.close(OrcOutputFormat.java:98)
 at 
 org.apache.hadoop.hive.ql.exec.FileSinkOperator$FSPaths.closeWriters(FileSinkOperator.java:167)
 ... 13 more
 {code}
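The failure mode can be reproduced with a toy operator (illustrative names only — "Writer" stands in for the ORC RecordWriter and "prevWriter" for the prevFSP reference described above): a writer cached in a field outlives closeOp(), so the next task reusing the container writes to a closed stream unless the reference is cleared.

```java
// Toy sketch of the container-reuse hazard, not Hive's actual FileSinkOperator.
public class SinkSketch {
    static final class Writer {
        private boolean closed;
        void write(String row) {
            if (closed) {
                // analogue of the ClosedChannelException in the stack trace
                throw new IllegalStateException("write after close");
            }
        }
        void close() { closed = true; }
    }

    private Writer prevWriter; // survives across tasks when the container is reused

    void process(String row) {
        if (prevWriter == null) {
            prevWriter = new Writer(); // lazily open a fresh writer per task
        }
        prevWriter.write(row);
    }

    void closeOp() {
        if (prevWriter != null) {
            prevWriter.close();
        }
        prevWriter = null; // the cleanup this issue asks for: drop the stale reference
    }
}
```

Without the final `prevWriter = null`, a second task in the same container would call write() on the already-closed writer and hit the exception.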





[jira] [Created] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-03-27 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-6766:


 Summary: HCatLoader always returns Char datatype with 
maxlength(255)  when table format is ORC
 Key: HIVE-6766
 URL: https://issues.apache.org/jira/browse/HIVE-6766
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical


The attached patch contains
org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar(),
which shows that a char(5) value written to a Hive (ORC) table using HCatStorer
comes back as char(255) when read with HCatLoader.





[jira] [Created] (HIVE-6767) Golden file updates for hadoop-2

2014-03-27 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-6767:
--

 Summary: Golden file updates for hadoop-2
 Key: HIVE-6767
 URL: https://issues.apache.org/jira/browse/HIVE-6767
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan








Review Request 19748: These tests are run only on hadoop-2. Golden files need to be updated for them.

2014-03-27 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19748/
---

Review request for hive and Vikram Dixit Kumaraswamy.


Bugs: HIVE-6767
https://issues.apache.org/jira/browse/HIVE-6767


Repository: hive


Description
---

These tests are run only on hadoop-2. Golden files need to be updated for them.


Diffs
-

  
trunk/ql/src/test/results/clientpositive/alter_numbuckets_partitioned_table2_h23.q.out
 1582450 
  
trunk/ql/src/test/results/clientpositive/alter_numbuckets_partitioned_table_h23.q.out
 1582450 
  
trunk/ql/src/test/results/clientpositive/infer_bucket_sort_reducers_power_two.q.out
 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_1.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_11.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_12.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_13.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_2.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_3.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_4.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_5.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_6.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_7.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_8.q.out 1582450 
  trunk/ql/src/test/results/clientpositive/list_bucket_dml_9.q.out 1582450 

Diff: https://reviews.apache.org/r/19748/diff/


Testing
---

Golden files updated.


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-6767) Golden file updates for hadoop-2

2014-03-27 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6767:
---

Status: Patch Available  (was: Open)

 Golden file updates for hadoop-2
 

 Key: HIVE-6767
 URL: https://issues.apache.org/jira/browse/HIVE-6767
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6767.patch








[jira] [Updated] (HIVE-6767) Golden file updates for hadoop-2

2014-03-27 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6767:
---

Attachment: HIVE-6767.patch

 Golden file updates for hadoop-2
 

 Key: HIVE-6767
 URL: https://issues.apache.org/jira/browse/HIVE-6767
 Project: Hive
  Issue Type: Task
  Components: Tests
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6767.patch








[jira] [Updated] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-03-27 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6766:
-

Status: Patch Available  (was: Open)

 HCatLoader always returns Char datatype with maxlength(255)  when table 
 format is ORC
 -

 Key: HIVE-6766
 URL: https://issues.apache.org/jira/browse/HIVE-6766
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical
 Attachments: HIVE-6766.patch


 The attached patch contains
 org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar(),
 which shows that a char(5) value written to a Hive (ORC) table using HCatStorer
 comes back as char(255) when read with HCatLoader.





[jira] [Updated] (HIVE-3272) RetryingRawStore will perform partial transaction on retry

2014-03-27 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-3272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-3272:
---

Fix Version/s: 0.13.0

 RetryingRawStore will perform partial transaction on retry
 --

 Key: HIVE-3272
 URL: https://issues.apache.org/jira/browse/HIVE-3272
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.10.0
Reporter: Kevin Wilfong
Priority: Critical
 Fix For: 0.13.0


 By the time the RetryingRawStore retries a command, the transaction
 encompassing it has already been rolled back. This means it will perform the
 remainder of the raw store commands outside of a transaction (unless another
 transaction encapsulates it, which is definitely not always the case) and
 then fail when it tries to commit, as no transaction is open.
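The safe pattern is to retry the entire transactional unit from the top, never to resume mid-sequence. A minimal sketch (hypothetical helper, not Hive's actual RetryingRawStore):

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Retries the WHOLE body: each attempt is expected to open its own
    // transaction, run every command, and commit as one unit, so a retry
    // never resumes partway with the transaction already rolled back.
    static <T> T retryWholeTxn(Callable<T> txnBody, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return txnBody.call();
            } catch (Exception e) {
                last = e; // the failed attempt rolled back; start over from the top
            }
        }
        throw last;
    }
}
```

Wrapping individual raw store commands instead of the whole unit is exactly the hazard the description identifies: the second half of the sequence runs with no open transaction.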





[jira] [Commented] (HIVE-6676) hcat cli fails to run when running with hive on tez

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949867#comment-13949867
 ] 

Thejas M Nair commented on HIVE-6676:
-

+1

 hcat cli fails to run when running with hive on tez
 ---

 Key: HIVE-6676
 URL: https://issues.apache.org/jira/browse/HIVE-6676
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6676.patch


 HIVE_CLASSPATH should be added to HADOOP_CLASSPATH before launching hcat CLI





[jira] [Commented] (HIVE-6697) HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949872#comment-13949872
 ] 

Alan Gates commented on HIVE-6697:
--

Ran tests on rebased patch, all looks good.

 HiveServer2 secure thrift/http authentication needs to support SPNego 
 --

 Key: HIVE-6697
 URL: https://issues.apache.org/jira/browse/HIVE-6697
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6697.1.patch, HIVE-6697.2.patch, HIVE-6697.3.patch, 
 HIVE-6697.4.patch, hive-6697-req-impl-verify.md


 Looking to integrate Apache Knox with HiveServer2 secure thrift/http.
 Found that thrift/http uses a form of Kerberos authentication that is not
 SPNego. Since it goes over the http protocol, it was expected to use the
 SPNego protocol.
 Apache Knox is already integrated with WebHDFS, WebHCat, Oozie and HBase 
 Stargate using SPNego for authentication.
 Requesting that HiveServer2 secure thrift/http authentication support SPNego.





[jira] [Commented] (HIVE-6686) webhcat does not honour -Dlog4j.configuration=$WEBHCAT_LOG4J of log4j.properties file on local filesystem.

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949874#comment-13949874
 ] 

Thejas M Nair commented on HIVE-6686:
-

Does that mean that a different fix is required with the bigtop rpm?


 webhcat does not honour -Dlog4j.configuration=$WEBHCAT_LOG4J of 
 log4j.properties file on local filesystem.
 --

 Key: HIVE-6686
 URL: https://issues.apache.org/jira/browse/HIVE-6686
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-6686.patch








[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949883#comment-13949883
 ] 

Owen O'Malley commented on HIVE-6757:
-

Hive does *not* have a need to maintain backwards compatibility with third-party 
jars. The user installed third-party jars and needs new versions to work with 
the current version of Hive. That doesn't mean that Hive should start 
publishing source code in the parquet namespace. 

There can't be any technical reason to block this patch. 
  * It removes unused java files. 
  * It does not break compatibility with any release of Hive
  * It prevents creating a new public API that starts deprecated.

This is straightforward goodness. It breaks no one and prevents downstream 
problems. I strongly encourage you to work with the parquet team to create a 
new version of their jar with these four compatibility classes.

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.





[jira] [Commented] (HIVE-6697) HiveServer2 secure thrift/http authentication needs to support SPNego

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949881#comment-13949881
 ] 

Thejas M Nair commented on HIVE-6697:
-

[~rhbutani] It will be very useful to have this SPNego support fix in Hive 0.13.


 HiveServer2 secure thrift/http authentication needs to support SPNego 
 --

 Key: HIVE-6697
 URL: https://issues.apache.org/jira/browse/HIVE-6697
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6697.1.patch, HIVE-6697.2.patch, HIVE-6697.3.patch, 
 HIVE-6697.4.patch, hive-6697-req-impl-verify.md


 Looking to integrate Apache Knox with HiveServer2 secure thrift/http.
 Found that thrift/http uses a form of Kerberos authentication that is not
 SPNego. Since it goes over the http protocol, it was expected to use the
 SPNego protocol.
 Apache Knox is already integrated with WebHDFS, WebHCat, Oozie and HBase 
 Stargate using SPNego for authentication.
 Requesting that HiveServer2 secure thrift/http authentication support SPNego.





[jira] [Commented] (HIVE-6686) webhcat does not honour -Dlog4j.configuration=$WEBHCAT_LOG4J of log4j.properties file on local filesystem.

2014-03-27 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949892#comment-13949892
 ] 

Eugene Koifman commented on HIVE-6686:
--

No, it's exactly the same.

 webhcat does not honour -Dlog4j.configuration=$WEBHCAT_LOG4J of 
 log4j.properties file on local filesystem.
 --

 Key: HIVE-6686
 URL: https://issues.apache.org/jira/browse/HIVE-6686
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HIVE-6686.patch








[jira] [Commented] (HIVE-4975) Reading orc file throws exception after adding new column

2014-03-27 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949903#comment-13949903
 ] 

Owen O'Malley commented on HIVE-4975:
-

+1

 Reading orc file throws exception after adding new column
 -

 Key: HIVE-4975
 URL: https://issues.apache.org/jira/browse/HIVE-4975
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.11.0
 Environment: hive 0.11.0 hadoop 1.0.0
Reporter: cyril liao
Assignee: Kevin Wilfong
Priority: Critical
  Labels: orcfile
 Fix For: 0.13.0

 Attachments: HIVE-4975.1.patch.txt


 ORC file read failure after adding a table column.
 Create a table with three columns (a string, b string, c string).
 Add a new column after c by executing ALTER TABLE table ADD COLUMNS (d
 string).
 Execute the HiveQL select d from table; the following exception is thrown:
 java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
 Hive Runtime Error while processing row [Error getting row data with 
 exception java.lang.ArrayIndexOutOfBoundsException: 4
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcStruct$OrcStructInspector.getStructFieldData(OrcStruct.java:206)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector.getStructFieldData(UnionStructObjectInspector.java:128)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:371)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:665)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
  ]
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:162)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
 Error while processing row [Error getting row data with exception 
 java.lang.ArrayIndexOutOfBoundsException: 4
   at 
 org.apache.hadoop.hive.ql.io.orc.OrcStruct$OrcStructInspector.getStructFieldData(OrcStruct.java:206)
   at 
 org.apache.hadoop.hive.serde2.objectinspector.UnionStructObjectInspector.getStructFieldData(UnionStructObjectInspector.java:128)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:371)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:236)
   at 
 org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:222)
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:665)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:436)
   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:372)
   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:396)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
   at org.apache.hadoop.mapred.Child.main(Child.java:249)
  ]
   at 
 org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:671)
   at org.apache.hadoop.hive.ql.exec.ExecMapper.map(ExecMapper.java:144)
   ... 8 more
 Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Error evaluating 
 d
   at 
 org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:80)
   at 

[jira] [Updated] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6546:


Fix Version/s: (was: 0.14.0)
   0.13.0

 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.13.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute="emp = load '/data/emp/emp_0.dat'; dump emp;" `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string "-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog" gets created
 in org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
 } else {
   if (i < args.length - 1) {
     prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
   }
 }
 {code}
 The bug is here:
 {code}
 if (prop != null) {
   if (prop.contains("=")) {  // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" does
                              // not contain an equals sign, so the else
                              // branch runs and appends "=-useHCatalog"
     // everything good
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];
     }
   }
   newArgs.add(prop);
 }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.
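Both proposed fixes can be seen in a minimal re-creation of the behaviour (illustrative code, not Hadoop's actual GenericOptionsParser): a -D argument without an = consumes the following argument, so a placeholder that already contains = never triggers the fusion.

```java
import java.util.ArrayList;
import java.util.List;

public class WinArgsSketch {
    // Simplified analogue of preProcessForWindows(): a -D property with no
    // '=' is fused with the next argument, which is exactly how
    // "-D__WEBHCAT_TOKEN_FILE_LOCATION__" swallows "-useHCatalog".
    static List<String> preprocess(String[] args) {
        List<String> newArgs = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            String prop = args[i];
            if (prop.startsWith("-D") && !prop.contains("=") && i < args.length - 1) {
                prop += "=" + args[++i]; // the fusion described in the report
            }
            newArgs.add(prop);
        }
        return newArgs;
    }
}
```

preprocess(new String[]{"-D__TOKEN__", "-useHCatalog"}) yields a single fused argument, while a placeholder already containing = leaves both arguments intact — which is why giving the TOKEN_FILE_ARG_PLACEHOLDER constant an = sign would sidestep the bug.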





[jira] [Commented] (HIVE-6546) WebHCat job submission for pig with -useHCatalog argument fails on Windows

2014-03-27 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13949911#comment-13949911
 ] 

Thejas M Nair commented on HIVE-6546:
-

Patch committed to 0.13 branch .


 WebHCat job submission for pig with -useHCatalog argument fails on Windows
 --

 Key: HIVE-6546
 URL: https://issues.apache.org/jira/browse/HIVE-6546
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.11.0, 0.12.0, 0.13.0, 0.14.0
 Environment: HDInsight deploying HDP 1.3:  
 c:\apps\dist\pig-0.11.0.1.3.2.0-05
 Also on Windows HDP 1.3 one-box configuration.
Reporter: Eric Hanson
Assignee: Eric Hanson
 Fix For: 0.13.0

 Attachments: HIVE-6546.01.patch, HIVE-6546.02.patch, 
 HIVE-6546.03.patch, HIVE-6546.03.patch, HIVE-6546.03.patch


 On a one-box windows setup, do the following from a powershell prompt:
 cmd /c curl.exe -s `
   -d user.name=hadoop `
   -d arg=-useHCatalog `
   -d execute=emp = load '/data/emp/emp_0.dat'; dump emp; `
   -d statusdir=/tmp/webhcat.output01 `
   'http://localhost:50111/templeton/v1/pig' -v
 The job fails with error code 7, but it should run. 
 I traced this down to the following. In the job configuration for the 
 TempletonJobController, we have templeton.args set to
 cmd,/c,call,C:\\hadooppig-0.11.0.1.3.0.0-0846/bin/pig.cmd,-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog,-execute,emp
  = load '/data/emp/emp_0.dat'; dump emp;
 Notice the = sign before -useHCatalog. I think this should be a comma.
 The bad string -D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog gets created 
 in org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows().
 It happens at line 434:
 {code}
   } else {
     if (i < args.length - 1) {
       prop += "=" + args[++i];   // RIGHT HERE! at iterations i = 37, 38
     }
   }
 {code}
 Bug is here:
 {code}
   if (prop != null) {
     if (prop.contains("=")) {
       // -D__WEBHCAT_TOKEN_FILE_LOCATION__ does not contain an equals sign,
       // so the else branch runs and appends "=-useHCatalog"
       // everything good
     } else {
       if (i < args.length - 1) {
         prop += "=" + args[++i];
       }
     }
     newArgs.add(prop);
   }
 {code}
 One possible fix is to change the string constant 
 org.apache.hcatalog.templeton.tool.TempletonControllerJob.TOKEN_FILE_ARG_PLACEHOLDER
  to have an = sign in it. Or, preProcessForWindows() itself could be 
 changed.
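 The fused-argument behavior described above can be reproduced with a minimal
 sketch. This is a simplified stand-in, not the actual
 org.apache.hadoop.util.GenericOptionsParser source: a -D option whose text
 contains no equals sign swallows the next argument, which is exactly how
 -useHCatalog ends up glued onto the token-file placeholder.

```java
import java.util.ArrayList;
import java.util.List;

public class PreProcessSketch {
    // Simplified stand-in for the pre-processing loop discussed above.
    static List<String> preProcess(String[] args) {
        List<String> newArgs = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            String prop = null;
            if (args[i].equals("-D")) {
                if (i < args.length - 1) prop = args[++i];
            } else if (args[i].startsWith("-D")) {
                prop = args[i];
            }
            if (prop != null) {
                if (prop.contains("=")) {
                    // key=value is already complete: nothing to do
                } else if (i < args.length - 1) {
                    // BUG TRIGGER: a bare -Dkey consumes the next token
                    prop += "=" + args[++i];
                }
                newArgs.add(prop);
            } else {
                newArgs.add(args[i]);
            }
        }
        return newArgs;
    }

    public static void main(String[] unused) {
        List<String> out = preProcess(new String[] {
            "-D__WEBHCAT_TOKEN_FILE_LOCATION__", "-useHCatalog"
        });
        // The placeholder and the unrelated pig flag are fused into one token:
        System.out.println(out);  // [-D__WEBHCAT_TOKEN_FILE_LOCATION__=-useHCatalog]
    }
}
```

 This is why giving TOKEN_FILE_ARG_PLACEHOLDER an embedded = sign fixes the
 symptom: the first branch ("already contains =") is taken and the following
 argument is left alone.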



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6757) Remove deprecated parquet classes from outside of org.apache package

2014-03-27 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949916#comment-13949916
 ] 

Brock Noland commented on HIVE-6757:


bq. Hive does not have a need to maintain backwards compatibility with third 
party jars. 

Simply because we do not have to does not mean we cannot. More to the point, 
there is no policy saying we cannot maintain backwards compatibility with 
existing Parquet users. The work was done by the Hive developer community for 
the Hive user community.

bq. There can't be any technical reason to block this patch.

Breaking Hive users, part of the Hive community, is a technical reason.
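The usual way to move classes into org.apache without breaking existing callers
is to leave a deprecated shim in the old package that delegates to the relocated
class. The sketch below uses hypothetical names, not the classes from the actual
patch, and shows the package declarations as comments since one file can hold
only one package:

```java
// package org.apache.hadoop.hive.ql.io.parquet;  (new, canonical location)
class ParquetSerDeImpl {
    public String describe() {
        return "canonical org.apache implementation";
    }
}

// package parquet.hive;  (old location, kept only for existing users)
/** @deprecated use the org.apache.hadoop.hive.ql.io.parquet class instead. */
@Deprecated
class ParquetSerDeShim extends ParquetSerDeImpl {
    // Empty body: old callers keep compiling against the old name
    // and transparently inherit the relocated implementation.
}

public class ShimDemo {
    public static void main(String[] args) {
        ParquetSerDeImpl viaOldName = new ParquetSerDeShim();
        System.out.println(viaOldName.describe());  // canonical org.apache implementation
    }
}
```

The shim costs one near-empty class per relocated type and can be dropped in a
later major release; the debate above is about whether that grace period is
warranted, not whether it is technically feasible.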

 Remove deprecated parquet classes from outside of org.apache package
 

 Key: HIVE-6757
 URL: https://issues.apache.org/jira/browse/HIVE-6757
 Project: Hive
  Issue Type: Bug
Reporter: Owen O'Malley
Assignee: Owen O'Malley
Priority: Blocker
 Fix For: 0.13.0

 Attachments: HIVE-6757.patch


 Apache shouldn't release projects with files outside of the org.apache 
 namespace.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6734) DDL locking too coarse-grained in new db txn manager

2014-03-27 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13949921#comment-13949921
 ] 

Ashutosh Chauhan commented on HIVE-6734:


Left some comments on RB.

 DDL locking too coarse-grained in new db txn manager
 

 Key: HIVE-6734
 URL: https://issues.apache.org/jira/browse/HIVE-6734
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Fix For: 0.13.0

 Attachments: HIVE-6734.patch


 All DDL operations currently acquire an exclusive lock. This is too 
 coarse-grained: some operations, such as alter table add partition, shouldn't 
 take an exclusive lock on the entire table.
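 The finer granularity the issue asks for can be sketched as a mapping from DDL
 operation to lock type. The names below are hypothetical and do not come from
 Hive's DbTxnManager; the point is only that ADD PARTITION creates new
 metadata and data without rewriting existing partitions, so a shared table
 lock suffices for it while other DDL keeps the exclusive lock.

```java
import java.util.Locale;

public class DdlLockSketch {
    enum LockType { SHARED, EXCLUSIVE }

    // Hypothetical helper: pick a table-level lock for a DDL statement.
    static LockType lockFor(String ddl) {
        String op = ddl.trim().toUpperCase(Locale.ROOT);
        // ADD PARTITION only adds new metadata/data; readers of existing
        // partitions are unaffected, so a shared lock is enough.
        if (op.startsWith("ALTER TABLE") && op.contains("ADD PARTITION")) {
            return LockType.SHARED;
        }
        // Everything else (DROP TABLE, REPLACE COLUMNS, ...) stays
        // exclusive, matching the current coarse-grained behavior.
        return LockType.EXCLUSIVE;
    }

    public static void main(String[] args) {
        System.out.println(lockFor("ALTER TABLE t ADD PARTITION (ds='2014-03-27')"));  // SHARED
        System.out.println(lockFor("DROP TABLE t"));  // EXCLUSIVE
    }
}
```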



--
This message was sent by Atlassian JIRA
(v6.2#6252)

