[jira] [Updated] (HIVE-6765) ASTNodeOrigin unserializable leads to fail when join with view

2014-04-01 Thread Adrian Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrian Wang updated HIVE-6765:
--

Fix Version/s: 0.13.0

 ASTNodeOrigin unserializable leads to fail when join with view
 --

 Key: HIVE-6765
 URL: https://issues.apache.org/jira/browse/HIVE-6765
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Adrian Wang
 Fix For: 0.13.0

 Attachments: HIVE-6765.patch.1


 When a view contains a UDF and that view is used in a JOIN operation, Hive 
 hits a bug with a stack trace like:
 Caused by: java.lang.InstantiationException: 
 org.apache.hadoop.hive.ql.parse.ASTNodeOrigin
   at java.lang.Class.newInstance0(Class.java:359)
   at java.lang.Class.newInstance(Class.java:327)
   at sun.reflect.GeneratedMethodAccessor84.invoke(Unknown Source)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:616)
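For reference, the InstantiationException above is the generic failure of reflective no-argument instantiation. A minimal, self-contained sketch (illustrative class names only, not Hive's actual serialization path) that reproduces the same failure mode:
{code}
// Minimal illustration (not Hive code): reflective no-arg instantiation of a
// class that only has a parameterized constructor fails with exactly this
// InstantiationException.
public class NoArgCtorDemo {
    // Stand-in for a plan/metadata class like ASTNodeOrigin that exposes only
    // a parameterized constructor.
    static class Origin {
        private final String objectType;
        Origin(String objectType) { this.objectType = objectType; }
        String getObjectType() { return objectType; }
    }

    public static void main(String[] args) throws Exception {
        Origin ok = new Origin("VIEW");            // direct construction is fine
        System.out.println(ok.getObjectType());
        // Serialization/cloning frameworks instantiate reflectively, which
        // requires a no-arg constructor and therefore throws here:
        Origin broken = Origin.class.newInstance();
        System.out.println(broken);
    }
}
{code}
If the attached patch resolves this by giving ASTNodeOrigin a default constructor (or otherwise making it serializable), that would match this failure mode; see the patch for the actual fix.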



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Status: Open  (was: Patch Available)

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.2.patch, HIVE-5998.3.patch, HIVE-5998.4.patch, HIVE-5998.5.patch, 
 HIVE-5998.6.patch, HIVE-5998.7.patch, HIVE-5998.8.patch, HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support to Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to what the RC 
 and ORC formats have, so Parquet can benefit from the vectorized execution engine.
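Roughly, a vectorized reader hands operators batches of column values instead of one row at a time. The sketch below only illustrates that shape; all types in it are invented for the example and do not mirror Hive's VectorizedRowBatch or the Parquet reader API.
{code}
import java.util.Arrays;

// Illustrative only: the row-at-a-time vs. batch-at-a-time contrast a vectorized
// Parquet reader is meant to provide. Types here are made up for the sketch.
public class VectorizedReaderSketch {
    static final int BATCH_SIZE = 1024;

    // A batch holds up to BATCH_SIZE values per column, filled column-by-column.
    static class ColumnBatch {
        final long[] ids = new long[BATCH_SIZE];
        final double[] amounts = new double[BATCH_SIZE];
        int size;
    }

    // Stand-in for a columnar page source (e.g. a Parquet column chunk).
    interface ColumnSource {
        int readLongs(long[] dst);      // returns number of values read
        int readDoubles(double[] dst);
    }

    // Fill one batch directly from the columnar source: no per-row object
    // creation, which lets vectorized operators run in tight loops.
    static boolean nextBatch(ColumnSource src, ColumnBatch batch) {
        int n = src.readLongs(batch.ids);
        src.readDoubles(batch.amounts);
        batch.size = n;
        return n > 0;
    }

    public static void main(String[] args) {
        // Tiny in-memory source standing in for a Parquet column chunk.
        ColumnSource src = new ColumnSource() {
            int pos = 0;
            final long[] data = {1, 2, 3};
            public int readLongs(long[] dst) {
                int n = Math.min(dst.length, data.length - pos);
                System.arraycopy(data, pos, dst, 0, n);
                pos += n;
                return n;
            }
            public int readDoubles(double[] dst) {
                Arrays.fill(dst, 0, dst.length, 0.5);
                return dst.length;
            }
        };
        ColumnBatch batch = new ColumnBatch();
        while (nextBatch(src, batch)) {
            System.out.println("batch of " + batch.size + " rows");
        }
    }
}
{code}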



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Attachment: HIVE-5998.11.patch

Rebased to current trunk; updated expected results with the Parquet SerDe 
'comment: null' output.

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.11.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, HIVE-5998.4.patch, 
 HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, HIVE-5998.8.patch, 
 HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support to Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to what the RC 
 and ORC formats have, so Parquet can benefit from the vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Status: Patch Available  (was: Open)

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.11.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, HIVE-5998.4.patch, 
 HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, HIVE-5998.8.patch, 
 HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support to Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to what the RC 
 and ORC formats have, so Parquet can benefit from the vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Attachment: HIVE-5998.11.patch

Now with ANSI encoding...

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.11.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, HIVE-5998.4.patch, 
 HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, HIVE-5998.8.patch, 
 HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support to Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to what the RC 
 and ORC formats have, so Parquet can benefit from the vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Attachment: (was: HIVE-5998.11.patch)

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.11.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, HIVE-5998.4.patch, 
 HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, HIVE-5998.8.patch, 
 HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support to Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to what the RC 
 and ORC formats have, so Parquet can benefit from the vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6739) Hive HBase query fails on Tez due to missing jars and then due to NPE in getSplits

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956241#comment-13956241
 ] 

Hive QA commented on HIVE-6739:
---



{color:red}Overall{color}: -1 no tests executed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637961/HIVE-6739.01.patch

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2060/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2060/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ [[ -n '' ]]
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-Build-2060/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ svn = \s\v\n ]]
+ [[ -n '' ]]
+ [[ -d apache-svn-trunk-source ]]
+ [[ ! -d apache-svn-trunk-source/.svn ]]
+ [[ ! -d apache-svn-trunk-source ]]
+ cd apache-svn-trunk-source
+ svn revert -R .
Reverted 
'shims/common-secure/src/main/java/org/apache/hadoop/hive/shims/HadoopShimsSecure.java'
++ awk '{print $2}'
++ egrep -v '^X|^Performing status on external'
++ svn status --no-ignore
+ rm -rf target datanucleus.log ant/target shims/target shims/0.20/target 
shims/0.20S/target shims/0.23/target shims/aggregator/target 
shims/common/target shims/common-secure/target packaging/target 
hbase-handler/target testutils/target jdbc/target metastore/target 
itests/target itests/hcatalog-unit/target itests/test-serde/target 
itests/qtest/target itests/hive-unit/target itests/custom-serde/target 
itests/util/target hcatalog/target hcatalog/storage-handlers/hbase/target 
hcatalog/server-extensions/target hcatalog/core/target 
hcatalog/webhcat/svr/target hcatalog/webhcat/java-client/target 
hcatalog/hcatalog-pig-adapter/target hwi/target common/target common/src/gen 
service/target contrib/target serde/target beeline/target odbc/target 
cli/target ql/dependency-reduced-pom.xml ql/target
+ svn update

Fetching external item into 'hcatalog/src/test/e2e/harness'
External at revision 1583571.

At revision 1583571.
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh 
/data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637961

 Hive HBase query fails on Tez due to missing jars and then due to NPE in 
 getSplits
 --

 Key: HIVE-6739
 URL: https://issues.apache.org/jira/browse/HIVE-6739
 Project: Hive
  Issue Type: Bug
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-6739.01.patch, HIVE-6739.patch, 
 HIVE-6739.preliminary.patch


 Tez paths in Hive never call configure on the input/output operators, so 
 (among other things, potentially) requisite files never get added to the job.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6800) HiveServer2 is not passing proxy user setting through hive-site

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956237#comment-13956237
 ] 

Hive QA commented on HIVE-6800:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637963/HIVE-6800.1.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5513 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2059/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2059/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637963

 HiveServer2 is not passing proxy user setting through hive-site
 ---

 Key: HIVE-6800
 URL: https://issues.apache.org/jira/browse/HIVE-6800
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6800.1.patch


 Setting the following in core-site.xml works fine in a secure cluster with 
 hive.server2.allow.user.substitution set to true:
 {code}
 <property>
   <name>hadoop.proxyuser.user1.groups</name>
   <value>users</value>
 </property>
 
 <property>
   <name>hadoop.proxyuser.user1.hosts</name>
   <value>*</value>
 </property>
 {code}
 where user1 will be proxying for user2:
 {code}
 !connect 
 jdbc:hive2://myhostname:1/;principal=hive/_h...@example.com;hive.server2.proxy.user=user2
  user1 fakepwd org.apache.hive.jdbc.HiveDriver
 {code}
 However, setting this in hive-site.xml throws a "Failed to validate proxy 
 privilage" exception.
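A plausible direction for a fix (a sketch of the idea only, not necessarily what HIVE-6800.1.patch does): make sure the hadoop.proxyuser.* keys defined in hive-site.xml reach the Configuration the proxy-user check consults, then refresh the cached rules. The exact wiring below, including the use of ProxyUsers.refreshSuperUserGroupsConfiguration, is an assumption.
{code}
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.authorize.ProxyUsers;

// Sketch only: copy proxy-user settings from the Hive configuration (which has
// hive-site.xml loaded) into the Hadoop configuration used for proxy checks,
// then refresh the cached superuser/group rules. Not the actual HIVE-6800 patch.
public class ProxyUserConfSketch {
    static void applyProxyUserSettings(Configuration hiveConf, Configuration hadoopConf) {
        for (Map.Entry<String, String> entry : hiveConf) {
            if (entry.getKey().startsWith("hadoop.proxyuser.")) {
                hadoopConf.set(entry.getKey(), entry.getValue());
            }
        }
        // Reload the proxy-user rules so the copied keys take effect.
        ProxyUsers.refreshSuperUserGroupsConfiguration(hadoopConf);
    }
}
{code}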



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956258#comment-13956258
 ] 

Thejas M Nair commented on HIVE-6780:
-

This needs rebasing post HIVE-6546. We should probably include a similar fix for 
the new param as well. [~ekoifman] Can you please take a look?


 Set tez credential file property along with MR conf property for Tez jobs
 -

 Key: HIVE-6780
 URL: https://issues.apache.org/jira/browse/HIVE-6780
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6780.patch


 WebHCat should set the additional property tez.credentials.path to the 
 same value as the MapReduce property.
 It should always proactively set this tez.credentials.path property to the 
 same value, and in the same cases, as when it sets the MR equivalent 
 property.
 NO PRECOMMIT TESTS
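In code terms the request is to mirror one configuration key onto another before job submission. The MR property name used below (mapreduce.job.credentials.binary) is an assumption for what the description calls "the MapReduce property"; the attached patch is authoritative.
{code}
import org.apache.hadoop.conf.Configuration;

// Sketch: whenever WebHCat sets the MapReduce credentials-file property, set the
// Tez equivalent to the same value. Property names are assumptions for the example.
public class TezCredentialsSketch {
    static final String MR_CREDENTIALS_KEY = "mapreduce.job.credentials.binary"; // assumed MR property
    static final String TEZ_CREDENTIALS_KEY = "tez.credentials.path";            // named in this issue

    static void mirrorCredentialsPath(Configuration conf) {
        String path = conf.get(MR_CREDENTIALS_KEY);
        if (path != null && !path.isEmpty()) {
            conf.set(TEZ_CREDENTIALS_KEY, path);
        }
    }
}
{code}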



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6804:


Reporter: Deepesh Khandelwal  (was: Thejas M Nair)

 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair

 The table owner gets all privileges on the table at the time of table creation.
 But granting some or all of those privileges again using a GRANT statement still 
 works, resulting in duplicate privileges. 
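One way to get the requested behaviour, sketched with simplified types (this is the idea only, not necessarily what HIVE-6804.1.patch implements): look up the grantee's existing privileges on the object before persisting a grant, and fail if the requested privilege is already present.
{code}
import java.util.Locale;
import java.util.Set;

// Sketch of a duplicate-grant guard. The surrounding types (how privileges are
// looked up and stored) are simplified placeholders, not Hive's metastore model.
public class GrantGuardSketch {
    static void checkNotAlreadyGranted(Set<String> existingPrivs, String requestedPriv,
                                       String principal) {
        String normalized = requestedPriv.toUpperCase(Locale.ROOT);
        if (existingPrivs.contains(normalized)) {
            throw new IllegalStateException(
                "Privilege " + normalized + " is already granted to " + principal);
        }
    }
}
{code}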



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6804:
---

 Summary: sql std auth - granting existing table privilege to owner 
should result in error
 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Sub-task
  Components: Authorization
Reporter: Thejas M Nair
Assignee: Thejas M Nair


The table owner gets all privileges on the table at the time of table creation.
But granting some or all of those privileges again using a GRANT statement still 
works, resulting in duplicate privileges. 




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956269#comment-13956269
 ] 

Thejas M Nair commented on HIVE-6804:
-

From [~deepesh]:
Steps to reproduce:
# Login as a public user (e.g. hrt_1).
{noformat}
0: jdbc:hive2://localhost:10> create table foobar (foo string, bar string);
No rows affected (0.167 seconds)
0: jdbc:hive2://localhost:10> show grant on table foobar;
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
| database  |  table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  |  gran   |
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
| default   | foobar  |            |         | hrt_1           | USER            | DELETE     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | INSERT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | SELECT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | UPDATE     | true          | 139629  |
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
4 rows selected (0.043 seconds)
0: jdbc:hive2://localhost:10> grant all on table foobar to user hrt_1 with grant option;
No rows affected (0.171 seconds)
0: jdbc:hive2://localhost:10> show grant on table foobar;
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
| database  |  table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  |  gran   |
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
| default   | foobar  |            |         | hrt_1           | USER            | DELETE     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | DELETE     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | INSERT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | INSERT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | SELECT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | SELECT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | UPDATE     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | UPDATE     | true          | 139629  |
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
8 rows selected (0.046 seconds)
{noformat}
I would not expect duplicate entries; either we should error out when we try to 
grant privileges on a table where the user already has the privileges, or the 
command should become a no-op.
# Now try grant another time and revoke.
{noformat}
0: jdbc:hive2://localhost:10> grant all on table foobar to user hrt_1 with grant option;
Error: Error while processing statement: FAILED: Execution Error, return code 1 
from org.apache.hadoop.hive.ql.exec.DDLTask. Error granting privileges: 
Internal error processing grant_privileges (state=08S01,code=1)
0: jdbc:hive2://localhost:10> show grant on table foobar;
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
| database  |  table  | partition  | column  | principal_name  | principal_type  | privilege  | grant_option  |  gran   |
+-----------+---------+------------+---------+-----------------+-----------------+------------+---------------+---------+
| default   | foobar  |            |         | hrt_1           | USER            | DELETE     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | DELETE     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | INSERT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | INSERT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | SELECT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | SELECT     | true          | 139629  |
| default   | foobar  |            |         | hrt_1           | USER            | UPDATE     | true          | 139629  |
| 

[jira] [Updated] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6804:


Issue Type: Bug  (was: Sub-task)
Parent: (was: HIVE-5837)

 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair

 The table owner gets all privileges on the table at the time of table creation.
 But granting some or all of those privileges again using a GRANT statement still 
 works, resulting in duplicate privileges. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6805) metastore api should consider privileges to be case insensitive

2014-04-01 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6805:
---

 Summary: metastore api should consider privileges to be case 
insensitive
 Key: HIVE-6805
 URL: https://issues.apache.org/jira/browse/HIVE-6805
 Project: Hive
  Issue Type: Bug
Reporter: Thejas M Nair
Assignee: Thejas M Nair


The metastore API has some code that treats privileges as case sensitive.
This needs to be corrected.
For example, in ObjectStore.grantPrivileges the following check does a 
case-sensitive comparison:
{code}
for (String privilege : privs) {
  if (privSet.contains(privilege)) {
    throw new InvalidObjectException(privilege
        + " is already granted by " + grantor);
  }
{code}
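A sketch of what a case-insensitive version of that check could look like, with simplified exception and container types (the real change belongs in ObjectStore.grantPrivileges):
{code}
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch of a case-insensitive version of the check quoted above; it only
// illustrates the comparison, not the metastore wiring.
public class CaseInsensitivePrivCheck {
    static void validate(List<String> privs, Set<String> alreadyGranted, String grantor) {
        // Normalize the existing grants once, then compare case-insensitively.
        Set<String> privSet = new TreeSet<String>(String.CASE_INSENSITIVE_ORDER);
        privSet.addAll(alreadyGranted);
        for (String privilege : privs) {
            if (privSet.contains(privilege)) {
                throw new IllegalArgumentException(privilege
                    + " is already granted by " + grantor);
            }
        }
    }
}
{code}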



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6804:


Attachment: HIVE-6804.1.patch

 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6804.1.patch


 The table owner gets all privileges on the table at the time of table creation.
 But granting some or all of those privileges again using a GRANT statement still 
 works, resulting in duplicate privileges. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6795) metastore initialization should add default roles with default, SBA

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956284#comment-13956284
 ] 

Thejas M Nair commented on HIVE-6795:
-

[~rhbutani] I think we should include this in 0.13; it is a small change that 
makes the new authorization setup more flexible.


 metastore initialization should add default roles with default, SBA
 ---

 Key: HIVE-6795
 URL: https://issues.apache.org/jira/browse/HIVE-6795
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6795.1.patch


 HiveServer2 running SQL standard authorization can connect to a metastore 
 running storage-based authorization. Currently the metastore does not add the 
 standard roles to the db in such cases.
 It would be better to add them in these cases as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6133) Support partial partition exchange

2014-04-01 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6133:


Attachment: HIVE-6133.1.patch.txt

 Support partial partition exchange
 --

 Key: HIVE-6133
 URL: https://issues.apache.org/jira/browse/HIVE-6133
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6133.1.patch.txt


 The current ALTER TABLE ... EXCHANGE PARTITION forces the source and destination 
 tables to have the same partition columns. But when one table has only a subset 
 of the partition columns and the provided partition spec supplements it to form 
 a complete partition spec, that restriction is unnecessary.
 For example, 
 {noformat}
 CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
 CREATE TABLE exchange_part_test2 (f1 string);
 ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
 TABLE exchange_part_test2;
 {noformat}
 or 
 {noformat}
 CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
 STRING);
 CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING);
 ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
 TABLE exchange_part_test2;
 {noformat}
 should be possible.
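The semantic check this implies is small: the partition columns of the less-partitioned table plus the columns fixed in the supplied partition spec must add up exactly to the partition columns of the fully partitioned table. A simplified sketch of that check (placeholder code, not the DDLSemanticAnalyzer change in the attached patch):
{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the validation a partial exchange needs; simplified stand-in only.
public class PartialExchangeCheck {
    static boolean specCompletesPartitioning(List<String> fullPartCols,
                                             List<String> partialPartCols,
                                             Map<String, String> partSpec) {
        List<String> covered = new ArrayList<String>(partSpec.keySet());
        covered.addAll(partialPartCols);
        return covered.containsAll(fullPartCols) && fullPartCols.containsAll(covered);
    }

    public static void main(String[] args) {
        // Second example above: one table partitioned by (ds, hr), the other by (hr),
        // and the spec fixes ds, so the exchange should be allowed.
        Map<String, String> spec = new LinkedHashMap<String, String>();
        spec.put("ds", "2013-04-05");
        System.out.println(specCompletesPartitioning(
            Arrays.asList("ds", "hr"), Arrays.asList("hr"), spec)); // true
    }
}
{code}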



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956285#comment-13956285
 ] 

Thejas M Nair commented on HIVE-6804:
-

[~rhbutani] This is an important bug fix for sql std auth. I think we should 
include this in 0.13.


 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6804.1.patch


 The table owner gets all privileges on the table at the time of table creation.
 But granting some or all of those privileges again using a GRANT statement still 
 works, resulting in duplicate privileges. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6133) Support partial partition exchange

2014-04-01 Thread Navis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Navis updated HIVE-6133:


Status: Patch Available  (was: Open)

 Support partial partition exchange
 --

 Key: HIVE-6133
 URL: https://issues.apache.org/jira/browse/HIVE-6133
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6133.1.patch.txt


 The current ALTER TABLE ... EXCHANGE PARTITION forces the source and destination 
 tables to have the same partition columns. But when one table has only a subset 
 of the partition columns and the provided partition spec supplements it to form 
 a complete partition spec, that restriction is unnecessary.
 For example, 
 {noformat}
 CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
 CREATE TABLE exchange_part_test2 (f1 string);
 ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
 TABLE exchange_part_test2;
 {noformat}
 or 
 {noformat}
 CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
 STRING);
 CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING);
 ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
 TABLE exchange_part_test2;
 {noformat}
 should be possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19865: Support partial partition exchange

2014-04-01 Thread Navis Ryu

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19865/
---

Review request for hive.


Bugs: HIVE-6133
https://issues.apache.org/jira/browse/HIVE-6133


Repository: hive-git


Description
---

The current ALTER TABLE ... EXCHANGE PARTITION forces the source and destination 
tables to have the same partition columns. But when one table has only a subset 
of the partition columns and the provided partition spec supplements it to form 
a complete partition spec, that restriction is unnecessary.

For example, 
{noformat}
CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
CREATE TABLE exchange_part_test2 (f1 string);
ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH TABLE 
exchange_part_test2;
{noformat}

or 

{noformat}
CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
STRING);
CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING);
ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH TABLE 
exchange_part_test2;
{noformat}

should be possible.


Diffs
-

  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
844e07c 
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 
1bbe02e 
  ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 117295a 
  ql/src/test/queries/clientnegative/exchange_partition_neg_partial_match1.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/exchange_partition_neg_partial_match2.q 
PRE-CREATION 
  ql/src/test/queries/clientnegative/exchange_partition_neg_partial_match3.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/exchange_partition_partial_match1.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/exchange_partition_partial_match2.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/exchange_partition_partial_match3.q 
PRE-CREATION 
  ql/src/test/queries/clientpositive/exchange_partition_partial_match4.q 
PRE-CREATION 
  
ql/src/test/results/clientnegative/exchange_partition_neg_partial_match1.q.out 
PRE-CREATION 
  
ql/src/test/results/clientnegative/exchange_partition_neg_partial_match2.q.out 
PRE-CREATION 
  
ql/src/test/results/clientnegative/exchange_partition_neg_partial_match3.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/exchange_partition_partial_match1.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/exchange_partition_partial_match2.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/exchange_partition_partial_match3.q.out 
PRE-CREATION 
  ql/src/test/results/clientpositive/exchange_partition_partial_match4.q.out 
PRE-CREATION 

Diff: https://reviews.apache.org/r/19865/diff/


Testing
---


Thanks,

Navis Ryu



[jira] [Commented] (HIVE-6802) Fix metastore.thrift: add partition_columns.types constant

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956354#comment-13956354
 ] 

Hive QA commented on HIVE-6802:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637968/HIVE-6802.1.patch

{color:green}SUCCESS:{color} +1 5513 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2062/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2062/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637968

 Fix metastore.thrift: add partition_columns.types constant
 --

 Key: HIVE-6802
 URL: https://issues.apache.org/jira/browse/HIVE-6802
 Project: Hive
  Issue Type: Bug
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6802.1.patch


 HIVE-6642 edited the hive_metastoreConstants.java genned file. 
 Need to add constant to thrift file and regen thrift classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HIVE-5998:
---

Attachment: HIVE-5998.12.patch

Not my best day... forgot to say --no-prefix on .11.patch

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.11.patch, HIVE-5998.12.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, 
 HIVE-5998.4.patch, HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, 
 HIVE-5998.8.patch, HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support to Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to what the RC 
 and ORC formats have, so Parquet can benefit from the vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6411) Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956456#comment-13956456
 ] 

Hive QA commented on HIVE-6411:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637969/HIVE-6411.8.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5515 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_auto_sortmerge_join_16
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2063/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2063/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637969

 Support more generic way of using composite key for HBaseHandler
 

 Key: HIVE-6411
 URL: https://issues.apache.org/jira/browse/HIVE-6411
 Project: Hive
  Issue Type: Improvement
  Components: HBase Handler
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6411.1.patch.txt, HIVE-6411.2.patch.txt, 
 HIVE-6411.3.patch.txt, HIVE-6411.4.patch.txt, HIVE-6411.5.patch.txt, 
 HIVE-6411.6.patch.txt, HIVE-6411.7.patch.txt, HIVE-6411.8.patch.txt


 HIVE-2599 introduced using a custom object for the row key. But it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and 
 keyOI with those. 
 The initial implementation is based on a factory interface:
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}
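To illustrate what such a pluggable key factory enables, here is a tiny self-contained example of interpreting a delimited row key as a struct of typed fields; the types are simplified placeholders rather than the HBaseKeyFactory/ObjectInspector machinery above.
{code}
import java.nio.charset.StandardCharsets;

// Illustration only: a custom key factory lets Hive see an HBase row key as a
// struct of typed fields instead of a single opaque string.
public class CompositeKeySketch {
    static class OrderKey {
        final String customer;
        final long orderId;
        OrderKey(String customer, long orderId) { this.customer = customer; this.orderId = orderId; }
    }

    // A custom factory would own this mapping from raw key bytes to fields.
    static OrderKey parse(byte[] rowKey) {
        String raw = new String(rowKey, StandardCharsets.UTF_8);
        int sep = raw.indexOf('_');
        return new OrderKey(raw.substring(0, sep), Long.parseLong(raw.substring(sep + 1)));
    }

    public static void main(String[] args) {
        OrderKey k = parse("acme_42".getBytes(StandardCharsets.UTF_8));
        System.out.println(k.customer + " / " + k.orderId); // acme / 42
    }
}
{code}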



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956483#comment-13956483
 ] 

Dilli Arumugam commented on HIVE-6799:
--

[~vaibhavgumashta]
Your observation is right - the problem is with a principal name of the form 
serviceName/h...@realm.com, which would typically be another service.

 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam

 HiveServer2 does not map kerberos name of authenticated principal to local 
 name.
 Due to this, I get an error like the following in the HiveServer log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have KINITED as knox/hdps.example@example.com
 I do have the following in core-site.xml
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>
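The mapping being asked for is essentially: reduce the authenticated principal to its short local name before the hadoop.proxyuser.<user>.* check runs. A naive sketch of that reduction is below; a real fix should go through Hadoop's auth_to_local rules (e.g. KerberosName) rather than string splitting.
{code}
// Sketch of the mapping this issue asks for: reduce a Kerberos principal such as
// "knox/hdps.example.com@EXAMPLE.COM" to the local short name "knox" before the
// hadoop.proxyuser.knox.* check runs. Illustration only; not the proposed patch.
public class PrincipalShortNameSketch {
    static String shortName(String principal) {
        int slash = principal.indexOf('/');
        int at = principal.indexOf('@');
        int end = principal.length();
        if (slash >= 0) end = Math.min(end, slash);
        if (at >= 0) end = Math.min(end, at);
        return principal.substring(0, end);
    }

    public static void main(String[] args) {
        System.out.println(shortName("knox/hdps.example.com@EXAMPLE.COM")); // knox
    }
}
{code}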



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6800) HiveServer2 is not passing proxy user setting through hive-site

2014-04-01 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956519#comment-13956519
 ] 

Prasad Mujumdar commented on HIVE-6800:
---

[~vaibhavgumashta] Thanks for fixing the issue. Looks fine to me.
+1


 HiveServer2 is not passing proxy user setting through hive-site
 ---

 Key: HIVE-6800
 URL: https://issues.apache.org/jira/browse/HIVE-6800
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6800.1.patch


 Setting the following in core-site.xml works fine in a secure cluster with 
 hive.server2.allow.user.substitution set to true:
 {code}
 <property>
   <name>hadoop.proxyuser.user1.groups</name>
   <value>users</value>
 </property>
 
 <property>
   <name>hadoop.proxyuser.user1.hosts</name>
   <value>*</value>
 </property>
 {code}
 where user1 will be proxying for user2:
 {code}
 !connect 
 jdbc:hive2://myhostname:1/;principal=hive/_h...@example.com;hive.server2.proxy.user=user2
  user1 fakepwd org.apache.hive.jdbc.HiveDriver
 {code}
 However, setting this in hive-site.xml throws a "Failed to validate proxy 
 privilage" exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6329) Support column level encryption/decryption

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956549#comment-13956549
 ] 

Hive QA commented on HIVE-6329:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637978/HIVE-6329.8.patch.txt

{color:green}SUCCESS:{color} +1 5515 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2064/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2064/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637978

 Support column level encryption/decryption
 --

 Key: HIVE-6329
 URL: https://issues.apache.org/jira/browse/HIVE-6329
 Project: Hive
  Issue Type: New Feature
  Components: Security, Serializers/Deserializers
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6329.1.patch.txt, HIVE-6329.2.patch.txt, 
 HIVE-6329.3.patch.txt, HIVE-6329.4.patch.txt, HIVE-6329.5.patch.txt, 
 HIVE-6329.6.patch.txt, HIVE-6329.7.patch.txt, HIVE-6329.8.patch.txt


 We have been receiving some requirements for encryption recently, but Hive does 
 not support it. Before the full implementation via HIVE-5207, this might be 
 useful for some cases.
 {noformat}
 hive> create table encode_test(id int, name STRING, phone STRING, address STRING) 
  ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' 
  WITH SERDEPROPERTIES ('column.encode.indices'='2,3', 
 'column.encode.classname'='org.apache.hadoop.hive.serde2.Base64WriteOnly') 
 STORED AS TEXTFILE;
 OK
 Time taken: 0.584 seconds
 hive> insert into table encode_test select 
 100,'navis','010--','Seoul, Seocho' from src tablesample (1 rows);
 ..
 OK
 Time taken: 5.121 seconds
 hive> select * from encode_test;
 OK
 100   navis MDEwLTAwMDAtMDAwMA==  U2VvdWwsIFNlb2Nobw==
 Time taken: 0.078 seconds, Fetched: 1 row(s)
 hive> 
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956561#comment-13956561
 ] 

Harish Butani commented on HIVE-6789:
-

+1 for .13

 HiveStatement client transport lock should unlock in finally block.
 ---

 Key: HIVE-6789
 URL: https://issues.apache.org/jira/browse/HIVE-6789
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6789.1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956565#comment-13956565
 ] 

Ashutosh Chauhan commented on HIVE-6804:


+1

 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6804.1.patch


 The table owner gets all privileges on the table at the time of table creation.
 But granting some or all of those privileges again using a GRANT statement still 
 works, resulting in duplicate privileges. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956566#comment-13956566
 ] 

Harish Butani commented on HIVE-6804:
-

+1 for 0.13

 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6804.1.patch


 The table owner gets all privileges on the table at the time of table creation.
 But granting some or all of those privileges again using a GRANT statement still 
 works, resulting in duplicate privileges. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6795) metastore initialization should add default roles with default, SBA

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956564#comment-13956564
 ] 

Harish Butani commented on HIVE-6795:
-

+1 for 0.13

 metastore initialization should add default roles with default, SBA
 ---

 Key: HIVE-6795
 URL: https://issues.apache.org/jira/browse/HIVE-6795
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6795.1.patch


 HiveServer2 running SQL standard authorization can connect to a metastore 
 running storage-based authorization. Currently the metastore does not add the 
 standard roles to the db in such cases.
 It would be better to add them in these cases as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6802) Fix metastore.thrift: add partition_columns.types constant

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6802:


Fix Version/s: 0.13.0

 Fix metastore.thrift: add partition_columns.types constant
 --

 Key: HIVE-6802
 URL: https://issues.apache.org/jira/browse/HIVE-6802
 Project: Hive
  Issue Type: Bug
Reporter: Harish Butani
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: HIVE-6802.1.patch


 HIVE-6642 edited the hive_metastoreConstants.java genned file. 
 Need to add constant to thrift file and regen thrift classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6802) Fix metastore.thrift: add partition_columns.types constant

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6802:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13.
Thanks Jason, Sergey, and Hari for reviewing.

 Fix metastore.thrift: add partition_columns.types constant
 --

 Key: HIVE-6802
 URL: https://issues.apache.org/jira/browse/HIVE-6802
 Project: Hive
  Issue Type: Bug
Reporter: Harish Butani
Assignee: Harish Butani
 Fix For: 0.13.0

 Attachments: HIVE-6802.1.patch


 HIVE-6642 edited the hive_metastoreConstants.java genned file. 
 Need to add constant to thrift file and regen thrift classes.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6795) metastore initialization should add default roles with default, SBA

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6795:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13.

 metastore initialization should add default roles with default, SBA
 ---

 Key: HIVE-6795
 URL: https://issues.apache.org/jira/browse/HIVE-6795
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.0
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Fix For: 0.13.0

 Attachments: HIVE-6795.1.patch


 HiveServer2 running SQL standard authorization can connect to a metastore 
 running storage-based authorization. Currently the metastore does not add the 
 standard roles to the db in such cases.
 It would be better to add them in these cases as well.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6793) DDLSemanticAnalyzer.analyzeShowRoles() should use HiveAuthorizationTaskFactory

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6793:
---

   Resolution: Fixed
Fix Version/s: 0.14.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Xuefu!

 DDLSemanticAnalyzer.analyzeShowRoles() should use HiveAuthorizationTaskFactory
 --

 Key: HIVE-6793
 URL: https://issues.apache.org/jira/browse/HIVE-6793
 Project: Hive
  Issue Type: Bug
  Components: Authorization, Query Processor
Affects Versions: 0.13.0
Reporter: Xuefu Zhang
Assignee: Xuefu Zhang
 Fix For: 0.14.0

 Attachments: HIVE-6793.patch


 Currently DDLSemanticAnalyzer.analyzeShowRoles() isn't using 
 HiveAuthorizationTaskFactory to create the task, which is at odds with other 
 authorization-related task creation such as analyzeShowRolePrincipals(). This 
 JIRA is to make it consistent.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956715#comment-13956715
 ] 

Hive QA commented on HIVE-6766:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12637984/HIVE-6766.1.patch

{color:green}SUCCESS:{color} +1 5539 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2066/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2066/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12637984

 HCatLoader always returns Char datatype with maxlength(255)  when table 
 format is ORC
 -

 Key: HIVE-6766
 URL: https://issues.apache.org/jira/browse/HIVE-6766
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical
 Attachments: HIVE-6766.1.patch, HIVE-6766.patch


 The attached patch contains
 org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar(),
 which shows that a char(5) value written to a Hive (ORC) table using HCatStorer 
 comes back as char(255) when read with HCatLoader.
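The shape of the fix this suggests (a sketch with placeholder types, not the HCatalog schema-conversion code itself): carry the declared char length through to the reader-side schema instead of substituting the type's maximum of 255.
{code}
// Sketch only: keep the declared char(n) length when translating a Hive column
// into the reader-side schema, rather than defaulting to the maximum (255).
// Types below are placeholders, not the actual HCatalog schema classes.
public class CharLengthSketch {
    static final int MAX_CHAR_LENGTH = 255;

    static int charLengthForSchema(Integer declaredLength) {
        // The buggy behaviour would amount to: return MAX_CHAR_LENGTH;
        return declaredLength != null ? declaredLength : MAX_CHAR_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(charLengthForSchema(5));    // 5, as written by HCatStorer
        System.out.println(charLengthForSchema(null)); // fall back to 255 only when unknown
    }
}
{code}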



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6800) HiveServer2 is not passing proxy user setting through hive-site

2014-04-01 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956740#comment-13956740
 ] 

Vaibhav Gumashta commented on HIVE-6800:


[~prasadm] Thanks for taking a look. The failure looks unrelated.

 HiveServer2 is not passing proxy user setting through hive-site
 ---

 Key: HIVE-6800
 URL: https://issues.apache.org/jira/browse/HIVE-6800
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6800.1.patch


 Setting the following in core-site.xml works fine in a secure cluster with 
 hive.server2.allow.user.substitution set to true:
 {code}
 <property>
   <name>hadoop.proxyuser.user1.groups</name>
   <value>users</value>
 </property>
 
 <property>
   <name>hadoop.proxyuser.user1.hosts</name>
   <value>*</value>
 </property>
 {code}
 where user1 will be proxying for user2:
 {code}
 !connect 
 jdbc:hive2://myhostname:1/;principal=hive/_h...@example.com;hive.server2.proxy.user=user2
  user1 fakepwd org.apache.hive.jdbc.HiveDriver
 {code}
 However, setting this in hive-site.xml throws a "Failed to validate proxy 
 privilage" exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6788) Abandoned opened transactions not being timed out

2014-04-01 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6788:
-

Attachment: HIVE-6788.patch

This patch adds logic to getOpenTxns to check for any abandoned transactions 
and move them from open to aborted before returning the list of open 
transactions.
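An in-memory sketch of that behaviour (the real patch works against the metastore's transaction store; the timeout value and data structures here are placeholders):
{code}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: before reporting open transactions, abort any whose last heartbeat is
// older than the timeout. Illustration of the described behaviour only.
public class TxnTimeoutSketch {
    enum State { OPEN, ABORTED }

    static List<Long> getOpenTxns(Map<Long, Long> lastHeartbeat, Map<Long, State> states,
                                  long now, long timeoutMs) {
        List<Long> open = new ArrayList<Long>();
        for (Map.Entry<Long, Long> e : lastHeartbeat.entrySet()) {
            long txnId = e.getKey();
            if (states.get(txnId) != State.OPEN) continue;
            if (now - e.getValue() > timeoutMs) {
                states.put(txnId, State.ABORTED);   // timed out: treat as abandoned
            } else {
                open.add(txnId);
            }
        }
        return open;
    }

    public static void main(String[] args) {
        Map<Long, Long> hb = new LinkedHashMap<Long, Long>();
        Map<Long, State> st = new LinkedHashMap<Long, State>();
        hb.put(1L, 0L);     st.put(1L, State.OPEN);   // stale heartbeat
        hb.put(2L, 9000L);  st.put(2L, State.OPEN);   // fresh heartbeat
        System.out.println(getOpenTxns(hb, st, 10000L, 5000L)); // [2]
    }
}
{code}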

 Abandoned opened transactions not being timed out
 -

 Key: HIVE-6788
 URL: https://issues.apache.org/jira/browse/HIVE-6788
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-6788.patch


 If a client abandons an open transaction it is never closed.  This does not 
 cause any immediate problems (as locks are timed out) but it will eventually 
 lead to high levels of open transactions in the lists that readers need to be 
 aware of when reading tables or partitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6788) Abandoned opened transactions not being timed out

2014-04-01 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates updated HIVE-6788:
-

Status: Patch Available  (was: Open)

 Abandoned opened transactions not being timed out
 -

 Key: HIVE-6788
 URL: https://issues.apache.org/jira/browse/HIVE-6788
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-6788.patch


 If a client abandons an open transaction it is never closed.  This does not 
 cause any immediate problems (as locks are timed out) but it will eventually 
 lead to high levels of open transactions in the lists that readers need to be 
 aware of when reading tables or partitions.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6783) Incompatible schema for maps between parquet-hive and parquet-pig

2014-04-01 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956820#comment-13956820
 ] 

Xuefu Zhang commented on HIVE-6783:
---

+1

 Incompatible schema for maps between parquet-hive and parquet-pig
 -

 Key: HIVE-6783
 URL: https://issues.apache.org/jira/browse/HIVE-6783
 Project: Hive
  Issue Type: Bug
  Components: File Formats
Affects Versions: 0.13.0
Reporter: Tongjie Chen
 Fix For: 0.13.0

 Attachments: HIVE-6783.1.patch.txt, HIVE-6783.2.patch.txt, 
 HIVE-6783.3.patch.txt, HIVE-6783.4.patch.txt


 See also the following parquet issue:
 https://github.com/Parquet/parquet-mr/issues/290
 The schema written for maps isn't compatible between hive and pig. This means 
 any files written in one cannot be properly read in the other.
 More specifically, for the same map column c1, parquet-pig generates this schema:
 message pig_schema {
   optional group c1 (MAP) {
     repeated group map (MAP_KEY_VALUE) {
       required binary key (UTF8);
       optional binary value;
     }
   }
 }
 while parquet-hive generates this schema:
 message hive_schema {
   optional group c1 (MAP_KEY_VALUE) {
     repeated group map {
       required binary key;
       optional binary value;
     }
   }
 }



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6806) Native Avro support in Hive

2014-04-01 Thread Jeremy Beard (JIRA)
Jeremy Beard created HIVE-6806:
--

 Summary: Native Avro support in Hive
 Key: HIVE-6806
 URL: https://issues.apache.org/jira/browse/HIVE-6806
 Project: Hive
  Issue Type: New Feature
  Components: Serializers/Deserializers
Affects Versions: 0.12.0
Reporter: Jeremy Beard
Priority: Minor


Avro is well established and widely used within Hive; however, creating 
Avro-backed tables requires messily listing the SerDe, InputFormat and 
OutputFormat classes.

Similarly to HIVE-5783 for Parquet, Hive would be easier to use if it had 
native Avro support.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Xuefu Zhang

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/#review39179
---


- Xuefu Zhang


On April 1, 2014, 12:59 a.m., Navis Ryu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18179/
 ---
 
 (Updated April 1, 2014, 12:59 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6411
 https://issues.apache.org/jira/browse/HIVE-6411
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-2599 introduced using a custom object for the row key. But it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and 
 keyOI with those. 
 
 The initial implementation is based on a factory interface:
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}
 
 
 Diffs
 -
 
   hbase-handler/pom.xml 132af43 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
 5008f15 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java
  PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseDefaultKeyFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
 PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
 PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
 b64590d 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
 4fe1b1b 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
  142bfd8 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
 fc40195 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java
  13c344b 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
 PRE-CREATION 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
 7c4fc9f 
   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
   hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
 PRE-CREATION 
   itests/util/pom.xml e9720df 
   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
 d39ee2e 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
 5f1329c 
   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
  9f35575 
   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
 PRE-CREATION 
   
 serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
 1fd6853 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 
 3334dff 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java 
 82c1263 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
   
 serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/LazySimpleStructObjectInspector.java
  8a5386a 
   
 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryObject.java 
 598683f 
   
 serde/src/java/org/apache/hadoop/hive/serde2/lazybinary/LazyBinaryStruct.java 
 caf3517 
 
 Diff: 

[jira] [Commented] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956840#comment-13956840
 ] 

Thejas M Nair commented on HIVE-6766:
-

[~rhbutani] This is a very useful bug fix to have in Hive 0.13.


 HCatLoader always returns Char datatype with maxlength(255)  when table 
 format is ORC
 -

 Key: HIVE-6766
 URL: https://issues.apache.org/jira/browse/HIVE-6766
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical
 Attachments: HIVE-6766.1.patch, HIVE-6766.patch


 attached patch contains
 org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar()
 which shows that char(5) value written to Hive (ORC) table using HCatStorer 
 will come back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Xuefu Zhang


 On March 25, 2014, 6:38 p.m., Xuefu Zhang wrote:
  hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java,
   line 31
  https://reviews.apache.org/r/18179/diff/6/?file=535090#file535090line31
 
  Do you think AbstractHBaseKeyFactory is slightly better?
 
 Navis Ryu wrote:
 Yes, it's a more conventional name. But I wanted related things 
 adjacent to each other. You don't like it?

It's not about whether I like it or not. AbstractHBaseKeyFactory sounds a little less 
confusing and seems more in keeping with Java class naming conventions. For 
instance, there is a Java class called AbstractExecutorService rather than 
ExecutorAbstractService. This is just my personal view, of course.


- Xuefu


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/#review38465
---


On April 1, 2014, 12:59 a.m., Navis Ryu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18179/
 ---
 
 (Updated April 1, 2014, 12:59 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6411
 https://issues.apache.org/jira/browse/HIVE-6411
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and 
 keyOI with those.
 
 The initial implementation is based on a factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}
 
 
 Diffs
 -
 
   hbase-handler/pom.xml 132af43 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
 5008f15 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java
  PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseDefaultKeyFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
 PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
 PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
 b64590d 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
 4fe1b1b 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
  142bfd8 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
 fc40195 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java
  13c344b 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
 PRE-CREATION 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
 7c4fc9f 
   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
   hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
 PRE-CREATION 
   itests/util/pom.xml e9720df 
   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
 d39ee2e 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
 5f1329c 
   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
  9f35575 
   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
 PRE-CREATION 
   
 

[jira] [Commented] (HIVE-6797) Add protection against divide by zero in stats annotation

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956846#comment-13956846
 ] 

Hive QA commented on HIVE-6797:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638001/HIVE-6797.2.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 5513 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testNegativeCliDriver_mapreduce_stack_trace_hadoop20
org.apache.hcatalog.pig.TestHCatStorerMulti.testStoreBasicTable
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2068/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2068/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638001

 Add protection against divide by zero in stats annotation
 -

 Key: HIVE-6797
 URL: https://issues.apache.org/jira/browse/HIVE-6797
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
 Fix For: 0.13.0

 Attachments: HIVE-6797.1.patch, HIVE-6797.2.patch


 In stats annotation, the denominator computation in the join operator is not 
 protected against divide-by-zero exceptions. This becomes an issue when the NDV 
 (count distinct) updated by updateStats() drops to 0. This patch adds protection in 
 the updateStats() method to prevent divide-by-zero in downstream operators.
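A minimal sketch of the kind of guard described above, assuming (as the description says) that the problem is an NDV estimate reaching 0 before it is used as a denominator; the class and method names here are illustrative, not the ones in the patch.

{code}
// Illustrative only: clamp the NDV so downstream divisions cannot hit zero.
final class StatsGuard {
  private StatsGuard() {}

  static long safeDenominator(long ndv) {
    // treat a zero (or negative) distinct-value count as 1
    return Math.max(ndv, 1L);
  }

  // e.g. join cardinality estimate: |R1| * |R2| / max NDV of the join keys
  static long joinRowEstimate(long rows1, long rows2, long maxNdv) {
    return (rows1 * rows2) / safeDenominator(maxNdv);
  }
}
{code}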



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 18179: Support more generic way of using composite key for HBaseHandler

2014-04-01 Thread Swarnim Kulkarni

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/18179/#review38496
---



hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
https://reviews.apache.org/r/18179/#comment70702

+1. I agree. Just by looking at the name, HBaseAbstractKeyFactory sounds 
like it's some kind of HBase specific extension on an AbstractKeyFactory rather 
than an extension of HBaseKeyFactory.


- Swarnim Kulkarni


On April 1, 2014, 12:59 a.m., Navis Ryu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/18179/
 ---
 
 (Updated April 1, 2014, 12:59 a.m.)
 
 
 Review request for hive.
 
 
 Bugs: HIVE-6411
 https://issues.apache.org/jira/browse/HIVE-6411
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 HIVE-2599 introduced using a custom object for the row key, but it forces key 
 objects to extend HBaseCompositeKey, which is in turn an extension of LazyStruct. 
 If the user provides a proper Object and OI, we can replace the internal key and 
 keyOI with those.
 
 The initial implementation is based on a factory interface.
 {code}
 public interface HBaseKeyFactory {
   void init(SerDeParameters parameters, Properties properties) throws 
 SerDeException;
   ObjectInspector createObjectInspector(TypeInfo type) throws SerDeException;
   LazyObjectBase createObject(ObjectInspector inspector) throws 
 SerDeException;
 }
 {code}
 
 
 Diffs
 -
 
   hbase-handler/pom.xml 132af43 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/ColumnMappings.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseAbstractKeyFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKey.java 
 5008f15 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseCompositeKeyFactory.java
  PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseDefaultKeyFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseLazyObjectFactory.java
  PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseRowSerializer.java 
 PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseScanRange.java 
 PRE-CREATION 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDe.java 5fe35a5 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseSerDeParameters.java 
 b64590d 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java 
 4fe1b1b 
   
 hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
  142bfd8 
   hbase-handler/src/java/org/apache/hadoop/hive/hbase/LazyHBaseRow.java 
 fc40195 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/HBaseTestCompositeKey.java
  13c344b 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory.java 
 PRE-CREATION 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestHBaseKeyFactory2.java 
 PRE-CREATION 
   
 hbase-handler/src/test/org/apache/hadoop/hive/hbase/TestLazyHBaseObject.java 
 7c4fc9f 
   hbase-handler/src/test/queries/positive/hbase_custom_key.q PRE-CREATION 
   hbase-handler/src/test/queries/positive/hbase_custom_key2.q PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key.q.out PRE-CREATION 
   hbase-handler/src/test/results/positive/hbase_custom_key2.q.out 
 PRE-CREATION 
   itests/util/pom.xml e9720df 
   ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java e52d364 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
 d39ee2e 
   ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
 5f1329c 
   ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java f0c0ecf 
   
 ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
  9f35575 
   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java e50026b 
   ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java ecb82d7 
   ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java c0a8269 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObject.java PRE-CREATION 
   serde/src/java/org/apache/hadoop/hive/serde2/StructObjectBaseInspector.java 
 PRE-CREATION 
   
 serde/src/java/org/apache/hadoop/hive/serde2/columnar/ColumnarStructBase.java 
 1fd6853 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObject.java 10f4c05 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyObjectBase.java 
 3334dff 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazySimpleSerDe.java 
 82c1263 
   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyStruct.java 8a1ea46 
   
 

[jira] [Updated] (HIVE-6394) Implement Timestmap in ParquetSerde

2014-04-01 Thread Szehon Ho (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szehon Ho updated HIVE-6394:


Assignee: Szehon Ho

I'll take a look at this issue; there has been a decision by the Parquet 
community on the data type to use.

https://github.com/Parquet/parquet-mr/issues/218

 Implement Timestmap in ParquetSerde
 ---

 Key: HIVE-6394
 URL: https://issues.apache.org/jira/browse/HIVE-6394
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers
Reporter: Jarek Jarcec Cecho
Assignee: Szehon Ho
  Labels: Parquet

 This JIRA is to implement timestamp support in Parquet SerDe.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956869#comment-13956869
 ] 

Jitendra Nath Pandey commented on HIVE-6778:


+1

 ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 
 =1 predicate in partition pruner. 
 --

 Key: HIVE-6778
 URL: https://issues.apache.org/jira/browse/HIVE-6778
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Harish Butani
 Attachments: HIVE-6778.1.patch


 select key, value, ds from pcr_foo where (ds % 2 == 1);
 ql/src/test/queries/clientpositive/pcr.q
 The test generates a 1.0==1 predicate in the pruner, which cannot be evaluated 
 since a double cannot be converted to an int.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5376) Hive does not honor type for partition columns when altering column type

2014-04-01 Thread Vikram Dixit K (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956905#comment-13956905
 ] 

Vikram Dixit K commented on HIVE-5376:
--

[~hsubramaniyan] I am not currently working on it. Please go ahead and assign 
it to yourself if you are working on it.

 Hive does not honor type for partition columns when altering column type
 

 Key: HIVE-5376
 URL: https://issues.apache.org/jira/browse/HIVE-5376
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Sergey Shelukhin
Assignee: Vikram Dixit K

 Follow-up for HIVE-5297. If a partition column of type string is changed to int, 
 the data is not verified. The values for partition columns are all in the 
 metastore DB, so it's easy to check and fail the type change.
 alter_partition_coltype.q (or some other test?) checks this behavior right 
 now.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6778) ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 =1 predicate in partition pruner.

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6778:


Status: Patch Available  (was: Open)

 ql/src/test/queries/clientpositive/pcr.q covers the test which generate 1.0 
 =1 predicate in partition pruner. 
 --

 Key: HIVE-6778
 URL: https://issues.apache.org/jira/browse/HIVE-6778
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Harish Butani
 Attachments: HIVE-6778.1.patch


 select key, value, ds from pcr_foo where (ds % 2 == 1);
 ql/src/test/queries/clientpositive/pcr.q
 The test generates a 1.0==1 predicate in the pruner, which cannot be evaluated 
 since a double cannot be converted to an int.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5998) Add vectorized reader for Parquet files

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956989#comment-13956989
 ] 

Hive QA commented on HIVE-5998:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638055/HIVE-5998.12.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5514 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_bucketed_table
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2069/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2069/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638055

 Add vectorized reader for Parquet files
 ---

 Key: HIVE-5998
 URL: https://issues.apache.org/jira/browse/HIVE-5998
 Project: Hive
  Issue Type: Sub-task
  Components: Serializers/Deserializers, Vectorization
Reporter: Remus Rusanu
Assignee: Remus Rusanu
Priority: Minor
  Labels: Parquet, vectorization
 Attachments: HIVE-5998.1.patch, HIVE-5998.10.patch, 
 HIVE-5998.11.patch, HIVE-5998.12.patch, HIVE-5998.2.patch, HIVE-5998.3.patch, 
 HIVE-5998.4.patch, HIVE-5998.5.patch, HIVE-5998.6.patch, HIVE-5998.7.patch, 
 HIVE-5998.8.patch, HIVE-5998.9.patch


 HIVE-5783 is adding native Parquet support in Hive. As Parquet is a columnar 
 format, it makes sense to provide a vectorized reader, similar to how RC and 
 ORC formats have, to benefit from vectorized execution engine.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-6799 started by Dilli Arumugam.

 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam

 HiveServer2 does not map the Kerberos name of the authenticated principal to a local name.
 Due to this, I get an error like the following in the HiveServer2 log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have kinited as knox/hdps.example@example.com
 I do have the following in core-site.xml:
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>
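A hedged sketch of the mapping the summary asks for, using Hadoop's KerberosName to reduce a principal such as knox/host@REALM to the short name knox before the hadoop.proxyuser.* check; whether HiveAuthFactory does it exactly this way is an assumption, and the auth_to_local rules are assumed to be loaded already from the cluster configuration.

{code}
import java.io.IOException;

import org.apache.hadoop.security.authentication.util.KerberosName;

// Sketch only: resolve the authenticated Kerberos principal to its local
// (short) name so the proxy-user check compares "knox", not the full
// "knox/host@REALM" string, against hadoop.proxyuser.knox.*.
public final class ProxyNameMapping {
  private ProxyNameMapping() {}

  public static String toLocalName(String principal) throws IOException {
    return new KerberosName(principal).getShortName();
  }
}
{code}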



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6799:
-

Attachment: HIVE-6799.patch

Patch to resolve the issue



 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6799.patch


 HiveServer2 does not map the Kerberos name of the authenticated principal to a local name.
 Due to this, I get an error like the following in the HiveServer2 log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have kinited as knox/hdps.example@example.com
 I do have the following in core-site.xml:
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19880: HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread dilli dorai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19880/
---

Review request for hive, Thejas Nair and Vaibhav Gumashta.


Bugs: HIVE-6799
https://issues.apache.org/jira/browse/HIVE-6799


Repository: hive-git


Description
---

see hive jira https://issues.apache.org/jira/browse/HIVE-6799


Diffs
-

  service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8f4822 

Diff: https://reviews.apache.org/r/19880/diff/


Testing
---

Before the patch:

HiveServer2 log file reported an exception with the message
Failed to validate proxy privilage of knox/hdps.example.com for sam

The intermediary service with Kerberos principal name knox/hdps.example.com was 
not able to proxy user sam.

After the patch:
HiveServer2 log does not report the exception.

The intermediary service with Kerberos principal name knox/hdps.example.com was 
able to proxy user sam and create a table as sam.


Thanks,

dilli dorai



[jira] [Commented] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956995#comment-13956995
 ] 

Dilli Arumugam commented on HIVE-6799:
--

Testing done for the patch

Before the patch:

HiveServer2 log file reported an exception with the message
Failed to validate proxy privilage of knox/hdps.example.com for sam

The intermediary service with Kerberos principal name knox/hdps.example.com was 
not able to proxy user sam.

After the patch:
HiveServer2 log does not report the exception.

The intermediary service with Kerberos principal name knox/hdps.example.com was 
able to proxy user sam and create a table as sam.

 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6799.patch


 HiveServer2 does not map the Kerberos name of the authenticated principal to a local name.
 Due to this, I get an error like the following in the HiveServer2 log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have kinited as knox/hdps.example@example.com
 I do have the following in core-site.xml:
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19880: HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19880/#review39202
---

Ship it!


Ship It!

- Vaibhav Gumashta


On April 1, 2014, 8:47 p.m., dilli dorai wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19880/
 ---
 
 (Updated April 1, 2014, 8:47 p.m.)
 
 
 Review request for hive, Thejas Nair and Vaibhav Gumashta.
 
 
 Bugs: HIVE-6799
 https://issues.apache.org/jira/browse/HIVE-6799
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 see hive jira https://issues.apache.org/jira/browse/HIVE-6799
 
 
 Diffs
 -
 
   service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8f4822 
 
 Diff: https://reviews.apache.org/r/19880/diff/
 
 
 Testing
 ---
 
 Before the patch:
 
 HiveServer2 log file reported an exception with the message
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 
 The intermediary service with Kerberos principal name knox/hdps.example.com 
 was not able to proxy user sam.
 
 After the patch:
 HiveServer2 log does not report the exception.
 
 The intermediary service with Kerberos principal name knox/hdps.example.com 
 was able to proxy user sam and create a table as sam.
 
 
 Thanks,
 
 dilli dorai
 




[jira] [Commented] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956996#comment-13956996
 ] 

Vaibhav Gumashta commented on HIVE-6799:


+1 (non-binding). Patch looks good.

 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6799.patch


 HiveServer2 does not map the Kerberos name of the authenticated principal to a local name.
 Due to this, I get an error like the following in the HiveServer2 log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have kinited as knox/hdps.example@example.com
 I do have the following in core-site.xml:
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13956999#comment-13956999
 ] 

Harish Butani commented on HIVE-6766:
-

+1 for 0.13

 HCatLoader always returns Char datatype with maxlength(255)  when table 
 format is ORC
 -

 Key: HIVE-6766
 URL: https://issues.apache.org/jira/browse/HIVE-6766
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical
 Attachments: HIVE-6766.1.patch, HIVE-6766.patch


 attached patch contains
 org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar()
 which shows that char(5) value written to Hive (ORC) table using HCatStorer 
 will come back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6799:
-

Attachment: HIVE-6799.1.patch

Patch to resolve the issue.
There is no difference between the previously attached HIVE-6799.patch and the 
current HIVE-6799.1.patch.
This patch is added just to keep the automated precommit process 
happy; I am not sure whether precommit would handle a patch without the .1 suffix 
correctly.

 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6799.1.patch, HIVE-6799.patch


 HiveServer2 does not map the Kerberos name of the authenticated principal to a local name.
 Due to this, I get an error like the following in the HiveServer2 log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have kinited as knox/hdps.example@example.com
 I do have the following in core-site.xml:
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6766) HCatLoader always returns Char datatype with maxlength(255) when table format is ORC

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6766:


   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Patch committed to trunk and 0.13 branch.
Thanks for the contribution Eugene, and for the review Sushanth!


 HCatLoader always returns Char datatype with maxlength(255)  when table 
 format is ORC
 -

 Key: HIVE-6766
 URL: https://issues.apache.org/jira/browse/HIVE-6766
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-6766.1.patch, HIVE-6766.patch


 attached patch contains
 org.apache.hive.hcatalog.pig.TestOrcHCatPigStorer#testWriteChar()
 which shows that char(5) value written to Hive (ORC) table using HCatStorer 
 will come back as char(255) when read with HCatLoader.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6789) HiveStatement client transport lock should unlock in finally block.

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6789:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to 0.13 branch and trunk.
Thanks for the contribution Vaibhav!


 HiveStatement client transport lock should unlock in finally block.
 ---

 Key: HIVE-6789
 URL: https://issues.apache.org/jira/browse/HIVE-6789
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6789.1.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6786) Off by one error in ORC PPD

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6786:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13

 Off by one error in ORC PPD 
 

 Key: HIVE-6786
 URL: https://issues.apache.org/jira/browse/HIVE-6786
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Gopal V
Assignee: Prasanth J
Priority: Critical
 Fix For: 0.13.0

 Attachments: HIVE-6786.1.patch


 Turning on ORC PPD makes split computation fail for a 10 TB benchmark.
 Narrowed down to the following code fragment:
 https://github.com/apache/hive/blob/branch-0.13/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java#L757
 {code}
 includeStripe[i] = (i > stripeStats.size()) ||
     isStripeSatisfyPredicate(stripeStats.get(i), sarg,
         filterColumns);
 {code}
 I would guess that should be a >=, but [~prasanth_j], can you comment if that 
 is the right fix?
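For reference, the corrected fragment the reporter seems to be suggesting (>= instead of >, so a stripe with no statistics entry is simply included rather than indexing past the end of the list) would read roughly:

{code}
// include the stripe when there are no stats for it, otherwise evaluate PPD
includeStripe[i] = (i >= stripeStats.size()) ||
    isStripeSatisfyPredicate(stripeStats.get(i), sarg, filterColumns);
{code}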



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6797) Add protection against divide by zero in stats annotation

2014-04-01 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957031#comment-13957031
 ] 

Prasanth J commented on HIVE-6797:
--

The failures are unrelated

 Add protection against divide by zero in stats annotation
 -

 Key: HIVE-6797
 URL: https://issues.apache.org/jira/browse/HIVE-6797
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
 Fix For: 0.13.0

 Attachments: HIVE-6797.1.patch, HIVE-6797.2.patch


 In stats annotation, the denominator computation in the join operator is not 
 protected against divide-by-zero exceptions. This becomes an issue when the NDV 
 (count distinct) updated by updateStats() drops to 0. This patch adds protection in 
 the updateStats() method to prevent divide-by-zero in downstream operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6799) HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread Dilli Arumugam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dilli Arumugam updated HIVE-6799:
-

Attachment: HIVE-6799.2.patch

Cumulative patch that changes the log level of the message from info to debug.

 HiveServer2 needs to map kerberos name to local name before proxy check
 ---

 Key: HIVE-6799
 URL: https://issues.apache.org/jira/browse/HIVE-6799
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Reporter: Dilli Arumugam
Assignee: Dilli Arumugam
 Attachments: HIVE-6799.1.patch, HIVE-6799.2.patch, HIVE-6799.patch


 HiveServer2 does not map the Kerberos name of the authenticated principal to a local name.
 Due to this, I get an error like the following in the HiveServer2 log:
 Failed to validate proxy privilage of knox/hdps.example.com for sam
 I have kinited as knox/hdps.example@example.com
 I do have the following in core-site.xml:
   <property>
     <name>hadoop.proxyuser.knox.groups</name>
     <value>users</value>
   </property>
   <property>
     <name>hadoop.proxyuser.knox.hosts</name>
     <value>*</value>
   </property>



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19880: HiveServer2 needs to map kerberos name to local name before proxy check

2014-04-01 Thread dilli dorai

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19880/
---

(Updated April 1, 2014, 9:29 p.m.)


Review request for hive, Thejas Nair and Vaibhav Gumashta.


Changes
---

Change the log level of the log message from info to debug.


Bugs: HIVE-6799
https://issues.apache.org/jira/browse/HIVE-6799


Repository: hive-git


Description
---

see hive jira https://issues.apache.org/jira/browse/HIVE-6799


Diffs (updated)
-

  service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java d8f4822 

Diff: https://reviews.apache.org/r/19880/diff/


Testing
---

Before the patch:

HiveServer2 log file reported an exception with the message
Failed to validate proxy privilage of knox/hdps.example.com for sam

The intermediary service with Kerberos principal name knox/hdps.example.com was 
not able to proxy user sam.

After the patch:
HiveServer2 log does not report the exception.

The intermediary service with Kerberos principal name knox/hdps.example.com was 
able to proxy user sam and create a table as sam.


Thanks,

dilli dorai



[jira] [Updated] (HIVE-6797) Add protection against divide by zero in stats annotation

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6797:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk and 0.13
thanks Prasanth.

 Add protection against divide by zero in stats annotation
 -

 Key: HIVE-6797
 URL: https://issues.apache.org/jira/browse/HIVE-6797
 Project: Hive
  Issue Type: Sub-task
  Components: Query Processor, Statistics
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
 Fix For: 0.13.0

 Attachments: HIVE-6797.1.patch, HIVE-6797.2.patch


 In stats annotation, the denominator computation in the join operator is not 
 protected against divide-by-zero exceptions. This becomes an issue when the NDV 
 (count distinct) updated by updateStats() drops to 0. This patch adds protection in 
 the updateStats() method to prevent divide-by-zero in downstream operators.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6031) explain subquery rewrite for where clause predicates

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6031:


Attachment: HIVE-6031.2.patch

 explain subquery rewrite for where clause predicates 
 -

 Key: HIVE-6031
 URL: https://issues.apache.org/jira/browse/HIVE-6031
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6031.1.patch, HIVE-6031.2.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6031) explain subquery rewrite for where clause predicates

2014-04-01 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-6031:


Status: Patch Available  (was: Open)

 explain subquery rewrite for where clause predicates 
 -

 Key: HIVE-6031
 URL: https://issues.apache.org/jira/browse/HIVE-6031
 Project: Hive
  Issue Type: Sub-task
Reporter: Harish Butani
Assignee: Harish Butani
 Attachments: HIVE-6031.1.patch, HIVE-6031.2.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6780:
-

Status: Patch Available  (was: Open)

 Set tez credential file property along with MR conf property for Tez jobs
 -

 Key: HIVE-6780
 URL: https://issues.apache.org/jira/browse/HIVE-6780
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6780.2.patch, HIVE-6780.patch


 WebHCat should set the additional property tez.credentials.path to the 
 same value as the MapReduce property.
 WebHCat should always proactively set this tez.credentials.path property to 
 the same value, and in the same cases, as it sets the MR equivalent 
 property.
 NO PRECOMMIT TESTS
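A minimal sketch of what the description asks for, assuming the MapReduce property in question is mapreduce.job.credentials.binary (an assumption; the JIRA does not name it here) and that WebHCat calls something like this wherever it already sets the MR property:

{code}
import org.apache.hadoop.conf.Configuration;

// Sketch only: whenever the MR credentials property is being set for a job,
// mirror the same value into the Tez property as well.
public final class TezCredentialsMirror {
  private TezCredentialsMirror() {}

  public static void mirror(Configuration conf) {
    String mrCredentialsPath = conf.get("mapreduce.job.credentials.binary");
    if (mrCredentialsPath != null) {
      conf.set("tez.credentials.path", mrCredentialsPath);
    }
  }
}
{code}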



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6780:
-

Attachment: HIVE-6780.2.patch

rebased and addressed Thejas' comments

 Set tez credential file property along with MR conf property for Tez jobs
 -

 Key: HIVE-6780
 URL: https://issues.apache.org/jira/browse/HIVE-6780
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6780.2.patch, HIVE-6780.patch


 WebHCat should set the additional property tez.credentials.path to the 
 same value as the MapReduce property.
 WebHCat should always proactively set this tez.credentials.path property to 
 the same value, and in the same cases, as it sets the MR equivalent 
 property.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6780:
-

Status: Open  (was: Patch Available)

 Set tez credential file property along with MR conf property for Tez jobs
 -

 Key: HIVE-6780
 URL: https://issues.apache.org/jira/browse/HIVE-6780
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6780.2.patch, HIVE-6780.patch


 WebHCat should set the additional property tez.credentials.path to the 
 same value as the MapReduce property.
 WebHCat should always proactively set this tez.credentials.path property to 
 the same value, and in the same cases, as it sets the MR equivalent 
 property.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19889: Create/drop roles is case-sensitive whereas 'set role' is case insensitive

2014-04-01 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19889/
---

Review request for hive and Thejas Nair.


Bugs: HIVE-6796
https://issues.apache.org/jira/browse/HIVE-6796


Repository: hive-git


Description
---

Create/drop roles is case-sensitive whereas 'set role' is case insensitive


Diffs
-

  metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java 
27077b4 
  
ql/src/java/org/apache/hadoop/hive/ql/parse/authorization/HiveAuthorizationTaskFactoryImpl.java
 f4dd97b 
  ql/src/java/org/apache/hadoop/hive/ql/plan/GrantRevokeRoleDDL.java d8488a7 
  ql/src/java/org/apache/hadoop/hive/ql/plan/PrincipalDesc.java 7dc0ded 
  ql/src/java/org/apache/hadoop/hive/ql/plan/RoleDDLDesc.java b4da3d1 
  
ql/src/java/org/apache/hadoop/hive/ql/security/authorization/plugin/HivePrincipal.java
 62b8994 
  ql/src/test/results/clientnegative/authorization_drop_db_cascade.q.out 
eda2146 
  ql/src/test/results/clientnegative/authorization_drop_db_empty.q.out 27a6822 
  ql/src/test/results/clientnegative/authorization_drop_role_no_admin.q.out 
c03876d 
  ql/src/test/results/clientnegative/authorization_fail_7.q.out 69d 
  ql/src/test/results/clientnegative/authorization_priv_current_role_neg.q.out 
7f983ba 
  ql/src/test/results/clientnegative/authorization_public_create.q.out bccdc53 
  ql/src/test/results/clientnegative/authorization_public_drop.q.out 14f6b3a 
  ql/src/test/results/clientnegative/authorization_role_grant.q.out 0f88444 
  ql/src/test/results/clientnegative/authorization_rolehierarchy_privs.q.out 
7268370 
  ql/src/test/results/clientnegative/authorize_grant_public.q.out dae4331 
  ql/src/test/results/clientnegative/authorize_revoke_public.q.out cff88ca 
  ql/src/test/results/clientpositive/authorization_1.q.out 1c52151 
  ql/src/test/results/clientpositive/authorization_1_sql_std.q.out 3e39801 
  ql/src/test/results/clientpositive/authorization_5.q.out 3353adf 
  ql/src/test/results/clientpositive/authorization_9.q.out 3ec988c 
  ql/src/test/results/clientpositive/authorization_admin_almighty1.q.out 
df0d5c4 
  ql/src/test/results/clientpositive/authorization_role_grant1.q.out 305dd9d 
  ql/src/test/results/clientpositive/authorization_role_grant2.q.out f294311 
  ql/src/test/results/clientpositive/authorization_set_show_current_role.q.out 
d5fbc48 
  ql/src/test/results/clientpositive/authorization_view_sqlstd.q.out b431c35 

Diff: https://reviews.apache.org/r/19889/diff/


Testing
---

Updated existing test cases.


Thanks,

Ashutosh Chauhan



[jira] [Updated] (HIVE-6796) Create/drop roles is case-sensitive whereas 'set role' is case insensitive

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6796:
---

Attachment: HIVE-6796.patch

 Create/drop roles is case-sensitive whereas 'set role' is case insensitive
 --

 Key: HIVE-6796
 URL: https://issues.apache.org/jira/browse/HIVE-6796
 Project: Hive
  Issue Type: Bug
Reporter: Deepesh Khandelwal
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6796.patch


 Create/drop role operations should be case insensitive.
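A minimal sketch of the behaviour being requested, assuming role names are normalized to a single case before being stored or compared (where exactly the real patch does this, e.g. in the metastore or the authorization task factory, is not shown here):

{code}
import java.util.Locale;

// Sketch: normalize role names so CREATE ROLE, DROP ROLE and SET ROLE all
// agree on case, independent of the JVM's default locale.
public final class RoleNames {
  private RoleNames() {}

  public static String normalize(String roleName) {
    return roleName == null ? null : roleName.toLowerCase(Locale.ROOT);
  }
}
{code}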



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6796) Create/drop roles is case-sensitive whereas 'set role' is case insensitive

2014-04-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-6796:
---

Status: Patch Available  (was: Open)

 Create/drop roles is case-sensitive whereas 'set role' is case insensitive
 --

 Key: HIVE-6796
 URL: https://issues.apache.org/jira/browse/HIVE-6796
 Project: Hive
  Issue Type: Bug
Reporter: Deepesh Khandelwal
Assignee: Ashutosh Chauhan
 Attachments: HIVE-6796.patch


 Create/drop role operations should be case insensitive.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Review Request 19893: HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19893/
---

Review request for hive and Thejas Nair.


Bugs: HIVE-6068
https://issues.apache.org/jira/browse/HIVE-6068


Repository: hive-git


Description
---

https://issues.apache.org/jira/browse/HIVE-6068


Diffs
-

  data/files/non_ascii_tbl.txt PRE-CREATION 
  itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java 
0163788 
  service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 
6c604ce 

Diff: https://reviews.apache.org/r/19893/diff/


Testing
---

New test added to TestJdbcDriver2


Thanks,

Vaibhav Gumashta



[jira] [Updated] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6068:
---

Attachment: HIVE-6068.2.patch

 HiveServer2 client on windows does not handle the non-ascii characters 
 properly
 ---

 Key: HIVE-6068
 URL: https://issues.apache.org/jira/browse/HIVE-6068
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.13.0
 Environment: Windows 
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch


 When running a select query against a table which contains rows with 
 non-ASCII characters, the HiveServer2 Beeline client returns them incorrectly. Example:
 {noformat}
 738;Garçu, Le (1995);Drama
 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
 {noformat}
 come out from a HiveServer2 beeline client as:
 {noformat}
 '738' 'Gar?u, Le (1995)'  'Drama'
 '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
 {noformat}
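The mangled output above is the classic symptom of converting bytes to strings with the platform default charset, which on Windows is not UTF-8; a hedged sketch of the general remedy (not necessarily what HIVE-6068.2.patch does) is to pin the charset explicitly when converting row data:

{code}
import java.nio.charset.StandardCharsets;

// Sketch: decode and encode row data with an explicit charset instead of
// relying on the platform default codepage.
public final class RowText {
  private RowText() {}

  public static String decode(byte[] utf8Bytes) {
    return new String(utf8Bytes, StandardCharsets.UTF_8);
  }

  public static byte[] encode(String value) {
    return value.getBytes(StandardCharsets.UTF_8);
  }
}
{code}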



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6068:
---

Status: Patch Available  (was: Open)

 HiveServer2 client on windows does not handle the non-ascii characters 
 properly
 ---

 Key: HIVE-6068
 URL: https://issues.apache.org/jira/browse/HIVE-6068
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.13.0
 Environment: Windows 
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch


 When running a select query against a table which contains rows with 
 non-ASCII characters, the HiveServer2 Beeline client returns them incorrectly. Example:
 {noformat}
 738;Garçu, Le (1995);Drama
 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
 {noformat}
 come out from a HiveServer2 beeline client as:
 {noformat}
 '738' 'Gar?u, Le (1995)'  'Drama'
 '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-6068:
---

Status: Open  (was: Patch Available)

 HiveServer2 client on windows does not handle the non-ascii characters 
 properly
 ---

 Key: HIVE-6068
 URL: https://issues.apache.org/jira/browse/HIVE-6068
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.13.0
 Environment: Windows 
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch


 When running a select query against a table which contains rows with 
 non-ASCII characters, the HiveServer2 Beeline client returns them incorrectly. Example:
 {noformat}
 738;Garçu, Le (1995);Drama
 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
 {noformat}
 come out from a HiveServer2 beeline client as:
 {noformat}
 '738' 'Gar?u, Le (1995)'  'Drama'
 '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957070#comment-13957070
 ] 

Vaibhav Gumashta commented on HIVE-6068:


[~thejas] Thanks for the review. New patch incorporates the feedback.

 HiveServer2 client on windows does not handle the non-ascii characters 
 properly
 ---

 Key: HIVE-6068
 URL: https://issues.apache.org/jira/browse/HIVE-6068
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.13.0
 Environment: Windows 
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch


 When running a select query against a table which contains rows with 
 non-ASCII characters, the HiveServer2 Beeline client returns them incorrectly. Example:
 {noformat}
 738;Garçu, Le (1995);Drama
 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
 {noformat}
 come out from a HiveServer2 beeline client as:
 {noformat}
 '738' 'Gar?u, Le (1995)'  'Drama'
 '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6780) Set tez credential file property along with MR conf property for Tez jobs

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957086#comment-13957086
 ] 

Thejas M Nair commented on HIVE-6780:
-

+1

 Set tez credential file property along with MR conf property for Tez jobs
 -

 Key: HIVE-6780
 URL: https://issues.apache.org/jira/browse/HIVE-6780
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6780.2.patch, HIVE-6780.patch


 WebHCat should set the additional property tez.credentials.path to the 
 same value as the MapReduce property.
 WebHCat should always proactively set this tez.credentials.path property to 
 the same value, and in the same cases, as it sets the MR equivalent 
 property.
 NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-5814) Add DATE, TIMESTAMP, DECIMAL, CHAR, VARCHAR types support in HCat

2014-04-01 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957089#comment-13957089
 ] 

Eugene Koifman commented on HIVE-5814:
--

[~leftylev] The feature is complete but the doc changes are still needed.

 Add DATE, TIMESTAMP, DECIMAL, CHAR, VARCHAR types support in HCat
 -

 Key: HIVE-5814
 URL: https://issues.apache.org/jira/browse/HIVE-5814
 Project: Hive
  Issue Type: New Feature
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.13.0

 Attachments: HCat-Pig Type Mapping Hive 0.13.pdf, HIVE-5814.2.patch, 
 HIVE-5814.3.patch, HIVE-5814.4.patch, HIVE-5814.5.patch


 Hive 0.12 added support for new data types. Pig 0.12 added some as well. 
 HCat should handle these as well. Also note that CHAR was added recently.
 Also allow the user to specify a parameter in Pig, like HCatStorer('', '', 
 '-onOutOfRangeValue Throw'), to control what happens when Pig's value is out 
 of range for the target Hive column. Valid values for the option are Throw and 
 Null. Throw makes the runtime raise an exception; Null, which is the 
 default, means NULL is written to the target column and a message to that effect 
 is emitted to the log. Only one message per column/data type is sent to the 
 log.
 See the attached HCat-Pig Type Mapping Hive 0.13.pdf for exact mappings.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6133) Support partial partition exchange

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957095#comment-13957095
 ] 

Hive QA commented on HIVE-6133:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638030/HIVE-6133.1.patch.txt

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 5520 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_infer_bucket_sort_dyn_part
{noformat}

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2070/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2070/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638030

 Support partial partition exchange
 --

 Key: HIVE-6133
 URL: https://issues.apache.org/jira/browse/HIVE-6133
 Project: Hive
  Issue Type: Improvement
  Components: Query Processor
Reporter: Navis
Assignee: Navis
Priority: Minor
 Attachments: HIVE-6133.1.patch.txt


 Currently, ALTER TABLE ... EXCHANGE PARTITION coerces the source and destination 
 tables to have the same partition columns. But when the source table has only a 
 subset of the partition columns and the provided partition spec supplements them to 
 form a complete partition spec, that coercion is unnecessary.
 For example, 
 {noformat}
 CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING);
 CREATE TABLE exchange_part_test2 (f1 string);
 ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
 TABLE exchange_part_test2;
 {noformat}
 or 
 {noformat}
 CREATE TABLE exchange_part_test1 (f1 string) PARTITIONED BY (ds STRING, hr 
 STRING);
 CREATE TABLE exchange_part_test2 (f1 string) PARTITIONED BY (hr STRING);
 ALTER TABLE exchange_part_test1 EXCHANGE PARTITION (ds='2013-04-05') WITH 
 TABLE exchange_part_test2;
 {noformat}
 should be possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19893: HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Thejas Nair

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19893/#review39206
---

Ship it!


Ship It!

- Thejas Nair


On April 1, 2014, 10:07 p.m., Vaibhav Gumashta wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/19893/
 ---
 
 (Updated April 1, 2014, 10:07 p.m.)
 
 
 Review request for hive and Thejas Nair.
 
 
 Bugs: HIVE-6068
 https://issues.apache.org/jira/browse/HIVE-6068
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 https://issues.apache.org/jira/browse/HIVE-6068
 
 
 Diffs
 -
 
   data/files/non_ascii_tbl.txt PRE-CREATION 
   itests/hive-unit/src/test/java/org/apache/hive/jdbc/TestJdbcDriver2.java 
 0163788 
   service/src/java/org/apache/hive/service/cli/operation/SQLOperation.java 
 6c604ce 
 
 Diff: https://reviews.apache.org/r/19893/diff/
 
 
 Testing
 ---
 
 New test added to TestJdbcDriver2
 
 
 Thanks,
 
 Vaibhav Gumashta
 




[jira] [Commented] (HIVE-6068) HiveServer2 client on windows does not handle the non-ascii characters properly

2014-04-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957101#comment-13957101
 ] 

Thejas M Nair commented on HIVE-6068:
-

+1

 HiveServer2 client on windows does not handle the non-ascii characters 
 properly
 ---

 Key: HIVE-6068
 URL: https://issues.apache.org/jira/browse/HIVE-6068
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2, JDBC
Affects Versions: 0.13.0
 Environment: Windows 
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.13.0

 Attachments: HIVE-6068.1.patch, HIVE-6068.2.patch


 When running a select query against a table which contains rows with 
 non-ascii characters HiveServer2 Beeline client returns them wrong. Example:
 {noformat}
 738;Garçu, Le (1995);Drama
 741;Ghost in the Shell (Kôkaku kidôtai) (1995);Animation|Sci-Fi
 {noformat}
 come out from a HiveServer2 beeline client as:
 {noformat}
 '738' 'Gar?u, Le (1995)'  'Drama'
 '741' 'Ghost in the Shell (K?kaku kid?tai) (1995)''Animation|Sci-Fi'
 {noformat}
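 For illustration only (not the actual HiveServer2 fix): encoding a non-ASCII value 
 with a charset that cannot represent it silently replaces the character with '?', 
 which is the symptom shown above; using UTF-8 end to end keeps it intact. The demo 
 below assumes the row values are UTF-8 text.
 {code}
 import java.nio.charset.StandardCharsets;
 
 public class CharsetDemo {
   public static void main(String[] args) {
     String value = "Garçu, Le (1995)";
     // A charset without 'ç' substitutes '?' for the unmappable character:
     String lossy = new String(value.getBytes(StandardCharsets.US_ASCII), StandardCharsets.US_ASCII);
     // UTF-8 round-trips the value unchanged:
     String intact = new String(value.getBytes(StandardCharsets.UTF_8), StandardCharsets.UTF_8);
     System.out.println(lossy);   // Gar?u, Le (1995)
     System.out.println(intact);  // Garçu, Le (1995)
   }
 }
 {code}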



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6804) sql std auth - granting existing table privilege to owner should result in error

2014-04-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-6804:


Status: Patch Available  (was: Open)

 sql std auth - granting existing table privilege to owner should result in 
 error
 

 Key: HIVE-6804
 URL: https://issues.apache.org/jira/browse/HIVE-6804
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Deepesh Khandelwal
Assignee: Thejas M Nair
 Attachments: HIVE-6804.1.patch


 The table owner gets all privileges on the table at the time of table creation, but 
 granting some or all of those privileges again using a GRANT statement still 
 succeeds, resulting in duplicate privileges.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-5376) Hive does not honor type for partition columns when altering column type

2014-04-01 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan reassigned HIVE-5376:
---

Assignee: Hari Sankar Sivarama Subramaniyan  (was: Vikram Dixit K)

 Hive does not honor type for partition columns when altering column type
 

 Key: HIVE-5376
 URL: https://issues.apache.org/jira/browse/HIVE-5376
 Project: Hive
  Issue Type: Bug
  Components: CLI
Reporter: Sergey Shelukhin
Assignee: Hari Sankar Sivarama Subramaniyan

 Follow-up for HIVE-5297. If a partition column of type string is changed to int, the 
 data is not verified. The values for partition columns are all in the metastore db, 
 so it is easy to check them and fail the type change.
 alter_partition_coltype.q (or some other test?) checks this behavior right 
 now.
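 Illustrative only: the kind of metastore-side validation suggested above, checking 
 that existing partition values can be converted to the new column type (int in this 
 example) before allowing the ALTER to proceed. The names below are not from Hive code.
 {code}
 import java.util.List;
 
 class PartitionTypeCheckSketch {
   static void checkConvertibleToInt(List<String> existingPartitionValues) {
     for (String v : existingPartitionValues) {
       try {
         Integer.parseInt(v); // the values are already available in the metastore db
       } catch (NumberFormatException e) {
         throw new IllegalArgumentException(
             "Partition value '" + v + "' cannot be converted to the new column type int");
       }
     }
   }
 }
 {code}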



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-5687) Streaming support in Hive

2014-04-01 Thread Roshan Naik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roshan Naik updated HIVE-5687:
--

Attachment: Hive Streaming Ingest API for v4 patch.pdf
HIVE-5687.v4.patch

v4 patch: adds JSON writer support and tweaks to the JavaDocs.
Updated the PDF document.

 Streaming support in Hive
 -

 Key: HIVE-5687
 URL: https://issues.apache.org/jira/browse/HIVE-5687
 Project: Hive
  Issue Type: Sub-task
Reporter: Roshan Naik
Assignee: Roshan Naik
 Attachments: 5687-api-spec4.pdf, 5687-draft-api-spec.pdf, 
 5687-draft-api-spec2.pdf, 5687-draft-api-spec3.pdf, HIVE-5687.patch, 
 HIVE-5687.v2.patch, HIVE-5687.v3.patch, HIVE-5687.v4.patch, Hive Streaming 
 Ingest API for v3 patch.pdf, Hive Streaming Ingest API for v4 patch.pdf


 Implement support for Streaming data into HIVE.
 - Provide a client streaming API 
 - Transaction support: Clients should be able to periodically commit a batch 
 of records atomically
 - Immediate visibility: Records should be immediately visible to queries on 
 commit
 - Should not overload HDFS with too many small files
 Use Cases:
  - Streaming logs into HIVE via Flume
  - Streaming results of computations from Storm
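 A hedged usage sketch of such a client API, based only on the class names that appear 
 in the attached patch (HiveEndPoint, StreamingConnection, DelimitedInputWriter, 
 TransactionBatch); the constructors and method names below are assumptions about the 
 patch under review, not the committed API.
 {code}
 import java.util.Arrays;
 import org.apache.hive.streaming.DelimitedInputWriter;
 import org.apache.hive.streaming.HiveEndPoint;
 import org.apache.hive.streaming.StreamingConnection;
 import org.apache.hive.streaming.TransactionBatch;
 
 public class StreamingSketch {
   public static void main(String[] args) throws Exception {
     HiveEndPoint endPoint = new HiveEndPoint("thrift://metastore:9083", "default",
         "web_logs", Arrays.asList("2014-04-01"));
     StreamingConnection conn = endPoint.newConnection(true); // create partition if missing
     DelimitedInputWriter writer =
         new DelimitedInputWriter(new String[]{"host", "msg"}, ",", endPoint);
     TransactionBatch batch = conn.fetchTransactionBatch(10, writer);
     while (batch.remainingTransactions() > 0) {
       batch.beginNextTransaction();
       batch.write("host1,hello".getBytes());
       batch.commit();   // records become visible to queries on commit
     }
     batch.close();
     conn.close();
   }
 }
 {code}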



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HIVE-6626) HiveServer2 does not expand the DOWNLOADED_RESOURCES_DIR path

2014-04-01 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta reassigned HIVE-6626:
--

Assignee: Vaibhav Gumashta

 HiveServer2 does not expand the DOWNLOADED_RESOURCES_DIR path
 -

 Key: HIVE-6626
 URL: https://issues.apache.org/jira/browse/HIVE-6626
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.14.0


 The downloaded scratch dir is specified in HiveConf as:
 {code}
 DOWNLOADED_RESOURCES_DIR("hive.downloaded.resources.dir",
     System.getProperty("java.io.tmpdir") + File.separator + "${hive.session.id}_resources"),
 {code}
 However, hive.session.id does not get expanded.
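 A hedged sketch of the missing step: expanding the ${hive.session.id} placeholder in 
 the configured resources directory. The helper below is illustrative only; the 
 placeholder format comes from the snippet above.
 {code}
 class ResourceDirSketch {
   // e.g. "/tmp/${hive.session.id}_resources" -> "/tmp/1234-abcd_resources"
   static String expandResourceDir(String rawValue, String sessionId) {
     return rawValue.replace("${hive.session.id}", sessionId);
   }
 }
 {code}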



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Review Request 19754: Defines an API for streaming data into Hive using ACID support.

2014-04-01 Thread Roshan Naik

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/19754/
---

(Updated April 1, 2014, 11:53 p.m.)


Review request for hive.


Changes
---

updating patch


Bugs: HIVE-5687
https://issues.apache.org/jira/browse/HIVE-5687


Repository: hive-git


Description
---

Defines an API for streaming data into Hive using ACID support.


Diffs (updated)
-

  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreUtils.java 
1bbe02e 
  packaging/pom.xml de9b002 
  packaging/src/main/assembly/src.xml bdaa47b 
  pom.xml 7343683 
  streaming/pom.xml PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/AbstractRecordWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/ConnectionError.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/DelimitedInputWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/HeartBeatFailure.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/HiveEndPoint.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/ImpersonationFailed.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidColumn.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidPartition.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidTable.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/InvalidTrasactionState.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/PartitionCreationFailed.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/QueryFailedException.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/RecordWriter.java PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/SerializationError.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StreamingConnection.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StreamingException.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StreamingIOFailure.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/StrictJsonWriter.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/TransactionBatch.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/TransactionBatchUnAvailable.java 
PRE-CREATION 
  streaming/src/java/org/apache/hive/streaming/TransactionError.java 
PRE-CREATION 
  streaming/src/test/org/apache/hive/streaming/StreamingIntegrationTester.java 
PRE-CREATION 
  streaming/src/test/org/apache/hive/streaming/TestDelimitedInputWriter.java 
PRE-CREATION 
  streaming/src/test/org/apache/hive/streaming/TestStreaming.java PRE-CREATION 
  streaming/src/test/sit PRE-CREATION 

Diff: https://reviews.apache.org/r/19754/diff/


Testing
---

Unit tests included. Also did manual testing by streaming data using Flume.


Thanks,

Roshan Naik



[jira] [Commented] (HIVE-6788) Abandoned opened transactions not being timed out

2014-04-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957188#comment-13957188
 ] 

Hive QA commented on HIVE-6788:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12638090/HIVE-6788.patch

{color:green}SUCCESS:{color} +1 5539 tests passed

Test results: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2072/testReport
Console output: 
http://bigtop01.cloudera.org:8080/job/PreCommit-HIVE-Build/2072/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12638090

 Abandoned opened transactions not being timed out
 -

 Key: HIVE-6788
 URL: https://issues.apache.org/jira/browse/HIVE-6788
 Project: Hive
  Issue Type: Bug
  Components: Locking
Affects Versions: 0.13.0
Reporter: Alan Gates
Assignee: Alan Gates
 Attachments: HIVE-6788.patch


 If a client abandons an open transaction it is never closed.  This does not 
 cause any immediate problems (as locks are timed out) but it will eventually 
 lead to high levels of open transactions in the lists that readers need to be 
 aware of when reading tables or partitions.
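 Illustrative sketch only; none of these names come from the attached patch. The idea 
 described above is to abort any open transaction whose last heartbeat is older than a 
 configured timeout, the same way locks are already timed out.
 {code}
 import java.util.List;
 
 class TxnReaperSketch {
   static class OpenTxn {          // hypothetical stand-in for a metastore txn record
     long id;
     long lastHeartbeatMillis;
   }
 
   static void timeOutAbandonedTxns(List<OpenTxn> openTxns, long nowMillis, long timeoutMillis) {
     for (OpenTxn txn : openTxns) {
       if (nowMillis - txn.lastHeartbeatMillis > timeoutMillis) {
         abortTxn(txn.id);         // hypothetical abort hook
       }
     }
   }
 
   static void abortTxn(long txnId) {
     System.out.println("aborting abandoned txn " + txnId);
   }
 }
 {code}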



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-4329) HCatalog clients can't write to AvroSerde backed tables

2014-04-01 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-4329:
-

Assignee: David Chen

 HCatalog clients can't write to AvroSerde backed tables
 ---

 Key: HIVE-4329
 URL: https://issues.apache.org/jira/browse/HIVE-4329
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.10.0
 Environment: discovered in Pig, but it looks like the root cause 
 impacts all non-Hive users
Reporter: Sean Busbey
Assignee: David Chen

 Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
 with the following stacktrace:
 {code}
 java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
 cast to org.apache.hadoop.io.LongWritable
   at 
 org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
   at 
 org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
   at 
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
   at 
 org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
 {code}
 The proximal cause of this failure is that the AvroContainerOutputFormat's 
 signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
 forces a NullWritable. I'm not sure of a general fix, other than redefining 
 HiveOutputFormat to mandate a WritableComparable.
 It looks like accepting WritableComparable is what's done in the other Hive 
 OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
 be changed, since it's ignoring the key. That way fixing things so 
 FileRecordWriterContainer can always use NullWritable could get spun into a 
 different issue?
 The underlying cause for failure to write to AvroSerde tables is that 
 AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
 fixing the above will just push the failure into the placeholder RecordWriter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Work started] (HIVE-4329) HCatalog clients can't write to AvroSerde backed tables

2014-04-01 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-4329 started by David Chen.

 HCatalog clients can't write to AvroSerde backed tables
 ---

 Key: HIVE-4329
 URL: https://issues.apache.org/jira/browse/HIVE-4329
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.10.0
 Environment: discovered in Pig, but it looks like the root cause 
 impacts all non-Hive users
Reporter: Sean Busbey
Assignee: David Chen

 Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
 with the following stacktrace:
 {code}
 java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
 cast to org.apache.hadoop.io.LongWritable
   at 
 org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
   at 
 org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
   at 
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
   at 
 org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
 {code}
 The proximal cause of this failure is that the AvroContainerOutputFormat's 
 signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
 forces a NullWritable. I'm not sure of a general fix, other than redefining 
 HiveOutputFormat to mandate a WritableComparable.
 It looks like accepting WritableComparable is what's done in the other Hive 
 OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
 be changed, since it's ignoring the key. That way fixing things so 
 FileRecordWriterContainer can always use NullWritable could get spun into a 
 different issue?
 The underlying cause for failure to write to AvroSerde tables is that 
 AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
 fixing the above will just push the failure into the placeholder RecordWriter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-4329) HCatalog should use getHiveRecordWriter rather than getRecordWriter

2014-04-01 Thread David Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Chen updated HIVE-4329:
-

Summary: HCatalog should use getHiveRecordWriter rather than 
getRecordWriter  (was: HCatalog clients can't write to AvroSerde backed tables)

 HCatalog should use getHiveRecordWriter rather than getRecordWriter
 ---

 Key: HIVE-4329
 URL: https://issues.apache.org/jira/browse/HIVE-4329
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.10.0
 Environment: discovered in Pig, but it looks like the root cause 
 impacts all non-Hive users
Reporter: Sean Busbey
Assignee: David Chen

 Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
 with the following stacktrace:
 {code}
 java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
 cast to org.apache.hadoop.io.LongWritable
   at 
 org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
   at 
 org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
   at 
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
   at 
 org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
 {code}
 The proximal cause of this failure is that the AvroContainerOutputFormat's 
 signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
 forces a NullWritable. I'm not sure of a general fix, other than redefining 
 HiveOutputFormat to mandate a WritableComparable.
 It looks like accepting WritableComparable is what's done in the other Hive 
 OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
 be changed, since it's ignoring the key. That way fixing things so 
 FileRecordWriterContainer can always use NullWritable could get spun into a 
 different issue?
 The underlying cause for failure to write to AvroSerde tables is that 
 AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
 fixing the above will just push the failure into the placeholder RecordWriter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-6807:


 Summary: add HCatStorer ORC test to test missing columns
 Key: HIVE-6807
 URL: https://issues.apache.org/jira/browse/HIVE-6807
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957200#comment-13957200
 ] 

Eugene Koifman commented on HIVE-6807:
--

enable a test introduced in HIVE-6766 now that HIVE-4975 is fixed

 add HCatStorer ORC test to test missing columns
 ---

 Key: HIVE-6807
 URL: https://issues.apache.org/jira/browse/HIVE-6807
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-4329) HCatalog should use getHiveRecordWriter rather than getRecordWriter

2014-04-01 Thread David Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957201#comment-13957201
 ] 

David Chen commented on HIVE-4329:
--

I think the correct fix for this is that HCatalog should be calling the 
{{OutputFormat}}s' {{getHiveRecordWriter}} rather than {{getRecordWriter}}. 
Since the purpose of HCatalog is to provide read and write interfaces and the 
Hive Metastore's services to non-Hive clients, existing SerDes should work out 
of the box.

Fixing it this way will also allow other SerDes, such as Parquet, to work with 
HCatalog, since the ParquetSerDe currently has the same problem.
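A hedged sketch of that direction, using the HiveOutputFormat#getHiveRecordWriter 
signature as I recall it; treat the parameter list and the surrounding wiring as 
assumptions rather than the eventual patch.
{code}
import java.io.IOException;
import java.util.Properties;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.ql.exec.FileSinkOperator;
import org.apache.hadoop.hive.ql.io.HiveOutputFormat;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Progressable;

class WriterContainerSketch {
  // getHiveRecordWriter takes the table properties and does not force a key class,
  // which is where the LongWritable/NullWritable mismatch above comes from when
  // OutputFormat.getRecordWriter is used instead.
  FileSinkOperator.RecordWriter openWriter(HiveOutputFormat<?, ?> outputFormat, JobConf jc,
      Path finalOutPath, Properties tableProps, Progressable progress) throws IOException {
    return outputFormat.getHiveRecordWriter(jc, finalOutPath, Writable.class,
        false /* isCompressed */, tableProps, progress);
  }
}
{code}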

 HCatalog should use getHiveRecordWriter rather than getRecordWriter
 ---

 Key: HIVE-4329
 URL: https://issues.apache.org/jira/browse/HIVE-4329
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, Serializers/Deserializers
Affects Versions: 0.10.0
 Environment: discovered in Pig, but it looks like the root cause 
 impacts all non-Hive users
Reporter: Sean Busbey
Assignee: David Chen

 Attempting to write to a HCatalog defined table backed by the AvroSerde fails 
 with the following stacktrace:
 {code}
 java.lang.ClassCastException: org.apache.hadoop.io.NullWritable cannot be 
 cast to org.apache.hadoop.io.LongWritable
   at 
 org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat$1.write(AvroContainerOutputFormat.java:84)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:253)
   at 
 org.apache.hcatalog.mapreduce.FileRecordWriterContainer.write(FileRecordWriterContainer.java:53)
   at 
 org.apache.hcatalog.pig.HCatBaseStorer.putNext(HCatBaseStorer.java:242)
   at org.apache.hcatalog.pig.HCatStorer.putNext(HCatStorer.java:52)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
   at 
 org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
   at 
 org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:559)
   at 
 org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
 {code}
 The proximal cause of this failure is that the AvroContainerOutputFormat's 
 signature mandates a LongWritable key and HCat's FileRecordWriterContainer 
 forces a NullWritable. I'm not sure of a general fix, other than redefining 
 HiveOutputFormat to mandate a WritableComparable.
 It looks like accepting WritableComparable is what's done in the other Hive 
 OutputFormats, and there's no reason AvroContainerOutputFormat couldn't also 
 be changed, since it's ignoring the key. That way fixing things so 
 FileRecordWriterContainer can always use NullWritable could get spun into a 
 different issue?
 The underlying cause for failure to write to AvroSerde tables is that 
 AvroContainerOutputFormat doesn't meaningfully implement getRecordWriter, so 
 fixing the above will just push the failure into the placeholder RecordWriter.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6807:
-

Attachment: HIVE-6807.patch

 add HCatStorer ORC test to test missing columns
 ---

 Key: HIVE-6807
 URL: https://issues.apache.org/jira/browse/HIVE-6807
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6807.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6807) add HCatStorer ORC test to test missing columns

2014-04-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-6807:
-

Status: Patch Available  (was: Open)

 add HCatStorer ORC test to test missing columns
 ---

 Key: HIVE-6807
 URL: https://issues.apache.org/jira/browse/HIVE-6807
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.13.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Attachments: HIVE-6807.patch






--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HIVE-6785) query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe

2014-04-01 Thread Tongjie Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tongjie Chen updated HIVE-6785:
---

Attachment: HIVE-6785.1.patch.txt

 query fails when partitioned table's table level serde is ParquetHiveSerDe 
 and partition level serde is of different SerDe
 --

 Key: HIVE-6785
 URL: https://issues.apache.org/jira/browse/HIVE-6785
 Project: Hive
  Issue Type: Bug
  Components: File Formats, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Tongjie Chen
 Attachments: HIVE-6785.1.patch.txt


 More specifically, if the table contains string-type columns, it will result in the 
 following exception: Failed with exception 
 java.io.IOException:java.lang.ClassCastException: 
 parquet.hive.serde.primitive.ParquetStringInspector cannot be cast to 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableTimestampObjectInspector
 See also the following Parquet issue:
 https://github.com/Parquet/parquet-mr/issues/324



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HIVE-6808) sql std auth - describe table, show partitions are not being authorized

2014-04-01 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-6808:
---

 Summary: sql std auth - describe table, show partitions are not 
being authorized
 Key: HIVE-6808
 URL: https://issues.apache.org/jira/browse/HIVE-6808
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Affects Versions: 0.13.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair


Only users with SELECT privilege on the table should be able to do 'describe 
table' and 'show partitions'.




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HIVE-6785) query fails when partitioned table's table level serde is ParquetHiveSerDe and partition level serde is of different SerDe

2014-04-01 Thread Tongjie Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957258#comment-13957258
 ] 

Tongjie Chen commented on HIVE-6785:


This patch involves deleting a file and adding new files (mv), and there are no 
instructions in https://cwiki.apache.org/confluence/display/Hive/HowToContribute 
for deleting/adding files when using git. My patch was generated with git diff; if 
that does not work, I will resubmit a patch using 
svn.

https://reviews.apache.org/r/19896/

 query fails when partitioned table's table level serde is ParquetHiveSerDe 
 and partition level serde is of different SerDe
 --

 Key: HIVE-6785
 URL: https://issues.apache.org/jira/browse/HIVE-6785
 Project: Hive
  Issue Type: Bug
  Components: File Formats, Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Tongjie Chen
 Attachments: HIVE-6785.1.patch.txt


 More specifically, if the table contains string-type columns, it will result in the 
 following exception: Failed with exception 
 java.io.IOException:java.lang.ClassCastException: 
 parquet.hive.serde.primitive.ParquetStringInspector cannot be cast to 
 org.apache.hadoop.hive.serde2.objectinspector.primitive.SettableTimestampObjectInspector
 See also the following Parquet issue:
 https://github.com/Parquet/parquet-mr/issues/324



--
This message was sent by Atlassian JIRA
(v6.2#6252)

