[jira] [Created] (HIVE-5409) Enable vectorization for Tez

2013-10-01 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-5409:


 Summary: Enable vectorization for Tez
 Key: HIVE-5409
 URL: https://issues.apache.org/jira/browse/HIVE-5409
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch


Enable the vectorization optimization on Tez



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5409) Enable vectorization for Tez

2013-10-01 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5409:
-

Description: 
Enable the vectorization optimization on Tez
NO PRECOMMIT TESTS (WIP for Tez)

  was:Enable the vectorization optimization on Tez


 Enable vectorization for Tez
 

 Key: HIVE-5409
 URL: https://issues.apache.org/jira/browse/HIVE-5409
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch


 Enable the vectorization optimization on Tez
 NO PRECOMMIT TESTS (WIP for Tez)





[jira] [Updated] (HIVE-5409) Enable vectorization for Tez

2013-10-01 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5409:
-

Attachment: HIVE-5409.1.patch

 Enable vectorization for Tez
 

 Key: HIVE-5409
 URL: https://issues.apache.org/jira/browse/HIVE-5409
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch

 Attachments: HIVE-5409.1.patch


 Enable the vectorization optimization on Tez
 NO PRECOMMIT TESTS (WIP for Tez)





[jira] [Commented] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-10-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782668#comment-13782668
 ] 

Thejas M Nair commented on HIVE-4501:
-

bq. Are you saying that the same FS ref. can be used across UGIs? 
No, that is not happening in the current code. But this patch will accumulate FS 
refs that belong to different sessions, and when a session is closed it can end 
up closing FS objects that are associated with other sessions, while not closing 
the FS handles that belong to the session being closed. 
The change in TUGIContainingProcessor will just accumulate FS handles in a 
thread-local variable, but that thread can be used by different Hive sessions. 
There is no 1:1 mapping between HiveServer2 threads and Hive sessions. I hope 
this clarifies.
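A minimal, self-contained Java sketch of the hazard described above (names invented for illustration, not HiveServer2 code): a thread-local collection on a shared worker thread mixes handles from every session that runs on that thread, so there is no per-session set to close.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch only. A worker thread's thread-local set accumulates
// handles from whichever session happens to be scheduled on it, so closing
// "one session's handles" is not possible from that set.
public class ThreadLocalLeakSketch {
    private static final ThreadLocal<Set<String>> HANDLES =
            ThreadLocal.withInitial(HashSet::new);

    // Called once per request; HS2 threads serve requests from many sessions.
    static void runRequest(String sessionId) {
        HANDLES.get().add("fs-handle-for-" + sessionId);
    }

    static int handleCount() {
        return HANDLES.get().size();
    }

    public static void main(String[] args) {
        runRequest("sessionA");   // same thread, different sessions
        runRequest("sessionB");
        // The thread's set now holds handles from both sessions; closing the
        // whole set on sessionA's logout would also sweep sessionB's handle.
        System.out.println(handleCount());
    }
}
```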



 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Critical
 Attachments: HIVE-4501.1.patch, HIVE-4501.1.patch, HIVE-4501.1.patch, 
 HIVE-4501.trunk.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in insecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 
 Another workaround is to disable impersonation by setting 
 hive.server2.enable.doAs to false.
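For reference, the cache-disabling workaround mentioned above would be expressed in the site configuration roughly like this (a sketch; exact placement may vary by setup):

```xml
<!-- Workaround sketch: bypass the FileSystem object cache entirely -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <value>true</value>
</property>
```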





[jira] [Commented] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-10-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782669#comment-13782669
 ] 

Thejas M Nair commented on HIVE-4501:
-

bq. The change in TUGIContainingProcessor will just accumulate FS handles to a 
thread local variable.
Correction (I meant to say): The change in TUGIContainingProcessor will just 
accumulate UGI objects in a thread-local variable. The UGI objects being 
accumulated in the thread-local hashset can belong to different sessions.



 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Critical
 Attachments: HIVE-4501.1.patch, HIVE-4501.1.patch, HIVE-4501.1.patch, 
 HIVE-4501.trunk.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration. 
 As a workaround disable impersonation by setting hive.server2.enable.doAs to 
 false.





[jira] [Commented] (HIVE-5325) Implement statistics providing ORC writer and reader interfaces

2013-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782686#comment-13782686
 ] 

Hive QA commented on HIVE-5325:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606028/HIVE-5325.3.patch.txt

{color:green}SUCCESS:{color} +1 4077 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/978/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/978/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Implement statistics providing ORC writer and reader interfaces
 ---

 Key: HIVE-5325
 URL: https://issues.apache.org/jira/browse/HIVE-5325
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: orcfile, statistics
 Fix For: 0.13.0

 Attachments: HIVE-5325.1.patch.txt, HIVE-5325.2.patch.txt, 
 HIVE-5325.3.patch.txt, HIVE-5325-java-only.1.patch.txt, 
 HIVE-5325-java-only.2.patch.txt, HIVE-5325-java-only.3.patch.txt


 HIVE-5324 adds new interfaces that can be implemented by ORC reader/writer to 
 provide statistics. Writer provided statistics is used to update 
 table/partition level statistics in metastore. Reader provided statistics can 
 be used for reducer estimation, CBO etc. in the absence of metastore 
 statistics.





Re: Error - loading data into tables

2013-10-01 Thread Nitin Pawar
Manickam,

I am really not sure if Hive supports federated namespaces yet.
I have cc'd the dev list.

Maybe one of the core Hive developers will be able to tell how to load
data using Hive on a federated HDFS.


On Tue, Oct 1, 2013 at 12:59 PM, Manickam P manicka...@outlook.com wrote:

 Hi Pawar,

 I tried that option but it's not working. I have a federated HDFS cluster, and
 my core-site.xml is given below.

 I created the HDFS directory inside /home/storage/mount1 and tried to
 load the file, but I'm still getting the same error.

 Can you please tell me what mistake I'm making here? I don't have any clue.


 <configuration>
   <property>
     <name>fs.default.name</name>
     <value>viewfs:</value>
   </property>
   <property>
     <name>fs.viewfs.mounttable.default.link./home/storage/mount1</name>
     <value>hdfs://10.108.99.68:8020</value>
   </property>
   <property>
     <name>fs.viewfs.mounttable.default.link./home/storage/mount2</name>
     <value>hdfs://10.108.99.69:8020</value>
   </property>
 </configuration>


 Thanks,
 Manickam Ppa

 --
 Date: Mon, 30 Sep 2013 21:53:03 +0530
 Subject: Re: Error - loading data into tables
 From: nitinpawar...@gmail.com
 To: u...@hive.apache.org


 Is this /home/storage/... an HDFS directory?
 I think it's a normal filesystem directory.

 Try running this:
 load data local inpath '/home/storage/mount1/tabled.txt' INTO TABLE TEST;


 On Mon, Sep 30, 2013 at 7:13 PM, Manickam P manicka...@outlook.comwrote:

 Hi,

 I'm getting the below error while loading the data into a hive table.
 return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

 I used LOAD DATA INPATH '/home/storage/mount1/tabled.txt' INTO TABLE
 TEST; to insert into the table.


 Thanks,
 Manickam P




 --
 Nitin Pawar




-- 
Nitin Pawar


[jira] [Updated] (HIVE-5407) show create table creating unusable DDL when some reserved keywords exist

2013-10-01 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-5407:
--

Attachment: D13191.1.patch

code6 requested code review of HIVE-5407 [jira] show create table creating 
unusable DDL when some reserved keywords  exist.

Reviewers: JIRA

HIVE-5407

HIVE-701 already makes most reserved keywords available for 
table/column/partition names, and 'show create table' produces usable DDLs.
However, I think it's better if we quote table/column/partition names in the 
output of 'show create table', which is how MySQL works and seems more robust.

For example, using `select` as a column name produces unusable DDL:

create table table_select(`select` string);
show create table table_select;
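To make the failure concrete, here is a sketch of the round-trip problem (output format approximated, not Hive's verbatim output):

```sql
-- Unquoted DDL emitted by 'show create table' cannot be re-run:
CREATE TABLE table_select (select string);      -- parse error: reserved keyword
-- With backtick quoting, as proposed here, the DDL round-trips:
CREATE TABLE `table_select` (`select` string);
```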

TEST PLAN
  EMPTY

REVISION DETAIL
  https://reviews.facebook.net/D13191

AFFECTED FILES
  ql/src/java/org/apache/hadoop/hive/ql/exec/DDLTask.java
  ql/src/test/results/clientpositive/show_create_table_alter.q.out
  ql/src/test/results/clientpositive/show_create_table_db_table.q.out
  ql/src/test/results/clientpositive/show_create_table_delimited.q.out
  ql/src/test/results/clientpositive/show_create_table_partitioned.q.out
  ql/src/test/results/clientpositive/show_create_table_serde.q.out
  ql/src/test/results/clientpositive/show_create_table_view.q.out

MANAGE HERALD RULES
  https://reviews.facebook.net/herald/view/differential/

WHY DID I GET THIS EMAIL?
  https://reviews.facebook.net/herald/transcript/38757/

To: JIRA, code6


 show create table creating unusable DDL when some reserved keywords  exist
 --

 Key: HIVE-5407
 URL: https://issues.apache.org/jira/browse/HIVE-5407
 Project: Hive
  Issue Type: Bug
  Components: CLI
 Environment: hive 0.11
Reporter: Zhichun Wu
Priority: Minor
 Attachments: D13191.1.patch


 HIVE-701 already makes most reserved keywords available for 
 table/column/partition names and 'show create table' produces usable DDLs.
 However I think it's better if we quote table/column/partition names for the 
 output of 'show create table', which is how mysql works and seems more robust.
 For example, use select as column name will produce unusable DDL:
 {code}
 create table table_select(`select` string);
 show create table table_select;
 {code}





[jira] [Updated] (HIVE-5407) show create table creating unusable DDL when some reserved keywords exist

2013-10-01 Thread Zhichun Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichun Wu updated HIVE-5407:
-

Attachment: HIVE-5407.1.patch

 show create table creating unusable DDL when some reserved keywords  exist
 --

 Key: HIVE-5407
 URL: https://issues.apache.org/jira/browse/HIVE-5407
 Project: Hive
  Issue Type: Bug
  Components: CLI
 Environment: hive 0.11
Reporter: Zhichun Wu
Priority: Minor
 Attachments: D13191.1.patch


 HIVE-701 already makes most reserved keywords available for 
 table/column/partition names and 'show create table' produces usable DDLs.
 However I think it's better if we quote table/column/partition names for the 
 output of 'show create table', which is how mysql works and seems more robust.
 For example, use select as column name will produce unusable DDL:
 {code}
 create table table_select(`select` string);
 show create table table_select;
 {code}





[jira] [Updated] (HIVE-5407) show create table creating unusable DDL when some reserved keywords exist

2013-10-01 Thread Zhichun Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichun Wu updated HIVE-5407:
-

Attachment: (was: HIVE-5407.1.patch)

 show create table creating unusable DDL when some reserved keywords  exist
 --

 Key: HIVE-5407
 URL: https://issues.apache.org/jira/browse/HIVE-5407
 Project: Hive
  Issue Type: Bug
  Components: CLI
 Environment: hive 0.11
Reporter: Zhichun Wu
Priority: Minor
 Attachments: D13191.1.patch


 HIVE-701 already makes most reserved keywords available for 
 table/column/partition names and 'show create table' produces usable DDLs.
 However I think it's better if we quote table/column/partition names for the 
 output of 'show create table', which is how mysql works and seems more robust.
 For example, use select as column name will produce unusable DDL:
 {code}
 create table table_select(`select` string);
 show create table table_select;
 {code}





[jira] [Updated] (HIVE-5407) show create table creating unusable DDL when some reserved keywords exist

2013-10-01 Thread Zhichun Wu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhichun Wu updated HIVE-5407:
-

Status: Patch Available  (was: Open)

 show create table creating unusable DDL when some reserved keywords  exist
 --

 Key: HIVE-5407
 URL: https://issues.apache.org/jira/browse/HIVE-5407
 Project: Hive
  Issue Type: Bug
  Components: CLI
 Environment: hive 0.11
Reporter: Zhichun Wu
Priority: Minor
 Attachments: D13191.1.patch


 HIVE-701 already makes most reserved keywords available for 
 table/column/partition names and 'show create table' produces usable DDLs.
 However I think it's better if we quote table/column/partition names for the 
 output of 'show create table', which is how mysql works and seems more robust.
 For example, use select as column name will produce unusable DDL:
 {code}
 create table table_select(`select` string);
 show create table table_select;
 {code}





[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782750#comment-13782750
 ] 

Hive QA commented on HIVE-5405:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606026/HIVE-5405.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4070 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket_num_reducers
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/979/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/979/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and now default) hive plan serialization 
 format, Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in the configuration file:
 <property>
   <name>hive.plan.serialization.format</name>
   <value>javaXML</value>
 </property>
 we'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set as the serialization 
 format.
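A self-contained sketch of the mechanism a fix could use: java.beans.DefaultPersistenceDelegate tells XMLEncoder how to reconstruct a class that lacks a no-arg constructor. Token here is an invented stand-in for org.antlr.runtime.CommonToken, which is what makes XMLEncoder fail.

```java
import java.beans.DefaultPersistenceDelegate;
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

// Sketch only: Token stands in for CommonToken. Without a PersistenceDelegate,
// XMLEncoder throws InstantiationException because there is no no-arg ctor.
public class TokenDelegateDemo {
    public static class Token {
        private final int type;
        private final String text;
        public Token(int type, String text) { this.type = type; this.text = text; }
        public int getType() { return type; }
        public String getText() { return text; }
    }

    public static String encode(Token t) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(out);
        // Reconstruct Token via new Token(getType(), getText()) when decoding.
        enc.setPersistenceDelegate(Token.class,
                new DefaultPersistenceDelegate(new String[] { "type", "text" }));
        enc.writeObject(t);
        enc.close();
        return out.toString();
    }

    public static void main(String[] args) {
        String xml = encode(new Token(42, "select"));
        System.out.println(xml.contains("select"));
    }
}
```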





[jira] [Commented] (HIVE-5036) [WebHCat] Add cmd script for WebHCat

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782758#comment-13782758
 ] 

Hudson commented on HIVE-5036:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #467 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/467/])
HIVE-5036: [WebHCat] Add cmd script for WebHCat (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527837)
* /hive/trunk/hcatalog/bin/templeton.cmd
* /hive/trunk/hcatalog/build-support/checkstyle/apache_header.txt
* /hive/trunk/hcatalog/build.xml


 [WebHCat] Add cmd script for WebHCat
 

 Key: HIVE-5036
 URL: https://issues.apache.org/jira/browse/HIVE-5036
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5036-1.patch, HIVE-5036-2.patch


 NO PRECOMMIT TESTS 





[jira] [Commented] (HIVE-5035) [WebHCat] Hardening parameters for Windows

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782759#comment-13782759
 ] 

Hudson commented on HIVE-5035:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #467 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/467/])
HIVE-5035: [WebHCat] Hardening parameters for Windows (Daniel Dai via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527835)
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Server.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/StreamingDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java


 [WebHCat] Hardening parameters for Windows
 --

 Key: HIVE-5035
 URL: https://issues.apache.org/jira/browse/HIVE-5035
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5035-1.patch, HIVE-5035-2.patch


 Everything passed to the pig/hive/hadoop command line will be quoted. That includes:
 mapreducejar:
   libjars
   arg
   define
 mapreducestream:
   cmdenv
   define
   arg
 pig:
   arg
   execute
 hive:
   arg
   define
   execute
 NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-5283) Merge vectorization branch to trunk

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782760#comment-13782760
 ] 

Hudson commented on HIVE-5283:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #467 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/467/])
HIVE-5283 : Merge vectorization branch to trunk (Jitendra Nath Pandey via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527858)
* /hive/trunk
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* /hive/trunk/build-common.xml
* /hive/trunk/build.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/files/alltypesorc
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/src/gen/vectorization
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/KeyWrapper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CommonRCFileInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/VectorizedRCFileInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/VectorizedRCFileRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/BitFieldReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/IntegerReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthByteReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReaderV2.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcSerde.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/PhysicalOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AbstractOperatorDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFHex.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/JavaDataModel.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestVectorizedORCReader.java
* /hive/trunk/ql/src/test/queries/clientpositive/vectorization_short_regress.q
* /hive/trunk/ql/src/test/queries/clientpositive/vectorized_rcfile_columnar.q
* /hive/trunk/ql/src/test/results/clientpositive/add_part_exist.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_index.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_rename_partition.q.out
* /hive/trunk/ql/src/test/results/clientpositive/describe_table_json.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_creation.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/plan_json.q.out
* 

[jira] [Commented] (HIVE-5066) [WebHCat] Other code fixes for Windows

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13782761#comment-13782761
 ] 

Hudson commented on HIVE-5066:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #467 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/467/])
HIVE-5066: [WebHCat] Other code fixes for Windows (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1527840)
* /hive/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HcatDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSCleanup.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestServer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTempletonUtils.java


 [WebHCat] Other code fixes for Windows
 --

 Key: HIVE-5066
 URL: https://issues.apache.org/jira/browse/HIVE-5066
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5034-1.patch, HIVE-5066-2.patch, HIVE-5066-3.patch, 
 HIVE-5066-4.patch


 This is equivalent to HCATALOG-526, but updated to sync with latest trunk.
 NO PRECOMMIT TESTS





Missing graphics for Hive Web Interface wiki

2013-10-01 Thread Lefty Leverenz
The images for the Hive Web Interface wikidoc are missing
(HiveWebInterface: https://cwiki.apache.org/confluence/display/Hive/HiveWebInterface).
I'd like to restore or replace them.

A little sleuthing revealed that Edward Capriolo added them to the doc
originally in 2009:

+ == Walk through ==
+ === Authorize ===
+ attachment:1_hwi_authorize.png
+ attachment:2_hwi_authorize.png
+ === Schema Browser ===
+ attachment:3_schema_table.png
+ attachment:4_schema_browser.png
+ === Diagnostics ===
+ attachment:5_diagnostic.png
+ === Running a query ===
+ attachment:6_newsession.png
+ attachment:7_session_runquery.png
+ attachment:8_session_query_1.png
+ attachment:9_file_view.png

(See
http://mail-archives.apache.org/mod_mbox/hadoop-common-commits/200903.mbox/%3c20090304044542.23397.18...@aurora.apache.org%3E
.)

Does anyone have copies?  Or would someone like to supply new screen shots?

-- Lefty


[jira] [Updated] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

2013-10-01 Thread Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Douglas updated HIVE-5296:
--

  Description: 
Multiple connections to Hiveserver2, all of which are closed and disposed of 
properly, show the Java heap size growing extremely quickly. 

This issue can be recreated using the following code

{code}

import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

import org.apache.hive.service.cli.HiveSQLException;
import org.apache.log4j.Logger;

/*
 * Class which encapsulates the lifecycle of a query or statement.
 * Provides functionality which allows you to create a connection
 */

public class HiveClient {

    Connection con;
    Logger logger;
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";
    private String db;

    public HiveClient(String db)
    {
        logger = Logger.getLogger(HiveClient.class);
        this.db = db;

        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            logger.info("Can't find Hive driver");
        }

        String hiveHost = GlimmerServer.config.getString("hive/host");
        String hivePort = GlimmerServer.config.getString("hive/port");
        String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort
                + "/default";
        logger.info(String.format("Attempting to connect to %s", connectionString));
        try {
            con = DriverManager.getConnection(connectionString, "", "");
        } catch (Exception e) {
            logger.error("Problem instantiating the connection: " + e.getMessage());
        }
    }

    public int update(String query)
    {
        Integer res = 0;
        Statement stmt = null;
        try {
            stmt = con.createStatement();
            String switchdb = "USE " + db;
            logger.info(switchdb);
            stmt.executeUpdate(switchdb);
            logger.info(query);
            res = stmt.executeUpdate(query);
            logger.info("Query passed to server");
            stmt.close();
        } catch (HiveSQLException e) {
            logger.info(String.format("HiveSQLException thrown, this can be valid, " +
                    "but check the error: %s from the query %s", e.toString(), query));
        } catch (SQLException e) {
            logger.error(String.format("Unable to execute query %s. SQLException: %s", query, e));
        } catch (Exception e) {
            logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
        }

        if (stmt != null)
            try {
                stmt.close();
            } catch (SQLException e) {
                logger.error("Cannot close the statement, potential memory leak: " + e);
            }

        return res;
    }

    public void close()
    {
        if (con != null) {
            try {
                con.close();
            } catch (SQLException e) {
                logger.info("Problem closing connection: " + e);
            }
        }
    }

}
{code}

By creating and closing many HiveClient objects, the heap space used by the 
hiveserver2 runjar process can be seen to increase extremely quickly, without 
that space ever being released.
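An illustrative, self-contained Java sketch of the failure mode being reported (invented names, not Hive code): a static, process-wide cache gains an entry per connection, and closing the connection never evicts the entry, so the heap grows with the total connection count.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only. Models the reported leak: a static cache
// (standing in for FileSystem.CACHE) grows on connect and is never trimmed
// on close, so heap usage tracks lifetime connection count.
public class CacheGrowthSketch {
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    static void openConnection(String key) {
        CACHE.computeIfAbsent(key, k -> new byte[1024]); // cached "FS object"
    }

    static void closeConnection(String key) {
        // The bug being described: nothing is removed from CACHE here.
    }

    static int cacheSize() {
        return CACHE.size();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) {
            openConnection("conn-" + i);
            closeConnection("conn-" + i);
        }
        // Every connection was "closed", yet all 100 entries remain cached.
        System.out.println(cacheSize());
    }
}
```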

  was:
This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481

However, on inspection of the related patch and my built version of Hive (patch 
carried forward to 0.12.0), I am still seeing the described behaviour.

Multiple connections to Hiveserver2, all of which are closed and disposed of 
properly show the Java heap size to grow extremely quickly. 

This issue can be recreated using the following code

{code}

import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

import org.apache.hive.service.cli.HiveSQLException;
import org.apache.log4j.Logger;

/*
 * Class which encapsulates the lifecycle of a query or statement.
 * Provides functionality which 

[jira] [Commented] (HIVE-5296) Memory leak: OOM Error after multiple open/closed JDBC connections.

2013-10-01 Thread Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782799#comment-13782799
 ] 

Douglas commented on HIVE-5296:
---

Ok thanks -- In this case, my system was opening a lot of connections with 
fewer file handles ([HIVE-4501|https://issues.apache.org/jira/browse/HIVE-4501] 
is the file-handle memory leak). Some of these connections/queries also threw 
exceptions, which probably exacerbated the problem. I'll watch the other 
issue, and if I see steady heap usage over time as the total number of 
connections increases, we can mark this resolved.
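For illustration: the FileSystem.CACHE growth discussed in HIVE-4501 follows a general pattern. When a cache key type compares by identity (no equals/hashCode override, as with per-connection UGI instances), every lookup with a freshly constructed key misses and inserts a new entry, so the cache grows by one entry per connection and is never trimmed. A hypothetical sketch, not Hadoop's actual code:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheLeakDemo {
    // Key with identity semantics, like a per-connection UGI instance.
    public static class Key {
        final String user;
        public Key(String user) { this.user = user; }
        // no equals()/hashCode() override -> identity comparison in the map
    }

    public static final Map<Key, Object> CACHE = new HashMap<>();

    public static Object get(Key k) {
        // Always misses for a newly constructed Key, even for the same logical user,
        // so the cache gains one entry per call.
        return CACHE.computeIfAbsent(k, key -> new Object());
    }
}
```

Caching keyed on a value with proper equals/hashCode (e.g. the user name string) would bound the cache at one entry per distinct user.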

 Memory leak: OOM Error after multiple open/closed JDBC connections. 
 

 Key: HIVE-5296
 URL: https://issues.apache.org/jira/browse/HIVE-5296
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.12.0, 0.13.0
 Environment: Hive 0.12.0, Hadoop 1.1.2, Debian.
Reporter: Douglas
  Labels: hiveserver
 Fix For: 0.12.0, 0.13.0

 Attachments: HIVE-5296.1.patch, HIVE-5296.2.patch, HIVE-5296.patch, 
 HIVE-5296.patch, HIVE-5296.patch

   Original Estimate: 168h
  Remaining Estimate: 168h

 This error seems to relate to https://issues.apache.org/jira/browse/HIVE-3481
 However, on inspection of the related patch and my built version of Hive 
 (patch carried forward to 0.12.0), I am still seeing the described behaviour.
 Multiple connections to Hiveserver2, all of which are closed and disposed of 
 properly show the Java heap size to grow extremely quickly. 
 This issue can be recreated using the following code
 {code}
 import java.sql.DriverManager;
 import java.sql.Connection;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 import org.apache.hive.service.cli.HiveSQLException;
 import org.apache.log4j.Logger;
 /*
  * Class which encapsulates the lifecycle of a query or statement.
  * Provides functionality which allows you to create a connection
  */
 public class HiveClient {

   Connection con;
   Logger logger;
   private static String driverName = "org.apache.hive.jdbc.HiveDriver";
   private String db;

   public HiveClient(String db)
   {
       logger = Logger.getLogger(HiveClient.class);
       this.db = db;

       try {
           Class.forName(driverName);
       } catch (ClassNotFoundException e) {
           logger.info("Can't find Hive driver");
       }

       String hiveHost = GlimmerServer.config.getString("hive/host");
       String hivePort = GlimmerServer.config.getString("hive/port");
       String connectionString = "jdbc:hive2://" + hiveHost + ":" + hivePort + "/default";
       logger.info(String.format("Attempting to connect to %s", connectionString));
       try {
           con = DriverManager.getConnection(connectionString, "", "");
       } catch (Exception e) {
           logger.error("Problem instantiating the connection: " + e.getMessage());
       }
   }

   public int update(String query)
   {
       int res = 0;
       Statement stmt = null;
       try {
           stmt = con.createStatement();
           String switchdb = "USE " + db;
           logger.info(switchdb);
           stmt.executeUpdate(switchdb);
           logger.info(query);
           res = stmt.executeUpdate(query);
           logger.info("Query passed to server");
           stmt.close();
       } catch (HiveSQLException e) {
           logger.info(String.format("HiveSQLException thrown, this can be valid, "
                   + "but check the error: %s from the query %s", e.toString(), query));
       } catch (SQLException e) {
           logger.error(String.format("Unable to execute query %s. SQLException: %s", query, e));
       } catch (Exception e) {
           logger.error(String.format("Unable to execute query %s. Error: %s", query, e));
       }

       if (stmt != null) {
           try {
               stmt.close();
           } catch (SQLException e) {
               logger.error("Cannot close the statement, potential memory leak: " + e);
           }
       }

       return res;
   }

   public void 

[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782804#comment-13782804
 ] 

Hive QA commented on HIVE-5403:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606025/HIVE-5403.1.patch

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 4070 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_bucket_num_reducers
org.apache.hadoop.hive.metastore.TestMetastoreVersion.testVersionRestriction
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/980/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/980/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

 Move loading of filesystem, ugi, metastore client to hive session
 -

 Key: HIVE-5403
 URL: https://issues.apache.org/jira/browse/HIVE-5403
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-5403.1.patch


 As part of HIVE-5184, connecting to the metastore and loading the filesystem 
 were done as part of the Tez session, so as to speed up query times while 
 paying a cost at startup. We can do this more generally in Hive so that it 
 applies to both the MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5036) [WebHCat] Add cmd script for WebHCat

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782895#comment-13782895
 ] 

Hudson commented on HIVE-5036:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #121 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/121/])
HIVE-5036: [WebHCat] Add cmd script for WebHCat (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527837)
* /hive/trunk/hcatalog/bin/templeton.cmd
* /hive/trunk/hcatalog/build-support/checkstyle/apache_header.txt
* /hive/trunk/hcatalog/build.xml


 [WebHCat] Add cmd script for WebHCat
 

 Key: HIVE-5036
 URL: https://issues.apache.org/jira/browse/HIVE-5036
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5036-1.patch, HIVE-5036-2.patch


 NO PRECOMMIT TESTS 





[jira] [Commented] (HIVE-5035) [WebHCat] Hardening parameters for Windows

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782896#comment-13782896
 ] 

Hudson commented on HIVE-5035:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #121 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/121/])
HIVE-5035: [WebHCat] Hardening parameters for Windows (Daniel Dai via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527835)
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Server.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/StreamingDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java


 [WebHCat] Hardening parameters for Windows
 --

 Key: HIVE-5035
 URL: https://issues.apache.org/jira/browse/HIVE-5035
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5035-1.patch, HIVE-5035-2.patch


 Everything passed to the pig/hive/hadoop command line will be quoted. That includes:
 mapreducejar:
 libjars
 arg
 define
 mapreducestream:
 cmdenv
 define
 arg
 pig
 arg
 execute
 hive
 arg
 define
 execute
 NO PRECOMMIT TESTS 





[jira] [Commented] (HIVE-5283) Merge vectorization branch to trunk

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782897#comment-13782897
 ] 

Hudson commented on HIVE-5283:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #121 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/121/])
HIVE-5283 : Merge vectorization branch to trunk (Jitendra Nath Pandey via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527858)
* /hive/trunk
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* /hive/trunk/build-common.xml
* /hive/trunk/build.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/files/alltypesorc
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/src/gen/vectorization
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/KeyWrapper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CommonRCFileInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/VectorizedRCFileInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/VectorizedRCFileRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/BitFieldReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/IntegerReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthByteReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReaderV2.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcSerde.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/PhysicalOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AbstractOperatorDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFHex.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/JavaDataModel.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestVectorizedORCReader.java
* /hive/trunk/ql/src/test/queries/clientpositive/vectorization_short_regress.q
* /hive/trunk/ql/src/test/queries/clientpositive/vectorized_rcfile_columnar.q
* /hive/trunk/ql/src/test/results/clientpositive/add_part_exist.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_index.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_rename_partition.q.out
* /hive/trunk/ql/src/test/results/clientpositive/describe_table_json.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_creation.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/plan_json.q.out
* 

[jira] [Commented] (HIVE-5066) [WebHCat] Other code fixes for Windows

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782898#comment-13782898
 ] 

Hudson commented on HIVE-5066:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #121 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/121/])
HIVE-5066: [WebHCat] Other code fixes for Windows (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527840)
* /hive/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HcatDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSCleanup.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestServer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTempletonUtils.java


 [WebHCat] Other code fixes for Windows
 --

 Key: HIVE-5066
 URL: https://issues.apache.org/jira/browse/HIVE-5066
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5034-1.patch, HIVE-5066-2.patch, HIVE-5066-3.patch, 
 HIVE-5066-4.patch


 This is equivalent to HCATALOG-526, but updated to sync with latest trunk.
 NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782920#comment-13782920
 ] 

Brock Noland commented on HIVE-5405:


I don't think the failure is related. I am +1 on the patch, but I do have a 
question: did the patch fix your issue, given that the unit tests won't exercise this format?

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running a hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 HIVE-1511 introduced a new (and now default) Hive plan serialization 
 format, Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in the configuration file:
 <property>
   <name>hive.plan.serialization.format</name>
   <value>javaXML</value>
 </property>
 we'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the case when javaXML is set as the serialization 
 format.
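For illustration: java.beans.XMLEncoder cannot serialize a class that lacks a no-arg constructor (hence the InstantiationException for CommonToken) unless a PersistenceDelegate tells it which constructor to use. A self-contained sketch of the mechanism, using a hypothetical FakeToken stand-in rather than the real CommonToken:

```java
import java.beans.DefaultPersistenceDelegate;
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class TokenXml {
    // Stand-in for org.antlr.runtime.CommonToken: immutable, no no-arg constructor,
    // so XMLEncoder cannot serialize it without a PersistenceDelegate.
    public static class FakeToken {
        private final int type;
        private final String text;
        public FakeToken(int type, String text) { this.type = type; this.text = text; }
        public int getType() { return type; }
        public String getText() { return text; }
    }

    public static byte[] encode(FakeToken t) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder enc = new XMLEncoder(out);
        // Reconstruct FakeToken via new FakeToken(getType(), getText())
        enc.setPersistenceDelegate(FakeToken.class,
                new DefaultPersistenceDelegate(new String[] { "type", "text" }));
        enc.writeObject(t);
        enc.close();
        return out.toByteArray();
    }

    public static FakeToken decode(byte[] xml) {
        XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(xml));
        FakeToken t = (FakeToken) dec.readObject();
        dec.close();
        return t;
    }
}
```

A fix along these lines would register a delegate for CommonToken before the plan is written with the javaXML format.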





[jira] [Commented] (HIVE-5036) [WebHCat] Add cmd script for WebHCat

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782988#comment-13782988
 ] 

Hudson commented on HIVE-5036:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #187 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/187/])
HIVE-5036: [WebHCat] Add cmd script for WebHCat (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527837)
* /hive/trunk/hcatalog/bin/templeton.cmd
* /hive/trunk/hcatalog/build-support/checkstyle/apache_header.txt
* /hive/trunk/hcatalog/build.xml


 [WebHCat] Add cmd script for WebHCat
 

 Key: HIVE-5036
 URL: https://issues.apache.org/jira/browse/HIVE-5036
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5036-1.patch, HIVE-5036-2.patch


 NO PRECOMMIT TESTS 





[jira] [Commented] (HIVE-5283) Merge vectorization branch to trunk

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782990#comment-13782990
 ] 

Hudson commented on HIVE-5283:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #187 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/187/])
HIVE-5283 : Merge vectorization branch to trunk (Jitendra Nath Pandey via 
Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527858)
* /hive/trunk
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorCode.java
* /hive/trunk/ant/src/org/apache/hadoop/hive/ant/GenVectorTestCode.java
* /hive/trunk/build-common.xml
* /hive/trunk/build.xml
* /hive/trunk/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
* /hive/trunk/conf/hive-default.xml.template
* /hive/trunk/data/files/alltypesorc
* /hive/trunk/ql/build.xml
* /hive/trunk/ql/src/gen/vectorization
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/FilterOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/GroupByOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/KeyWrapper.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Operator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/OperatorFactory.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ReduceSinkOperator.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/vector
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/CommonRCFileInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/VectorizedRCFileInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/VectorizedRCFileRecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/BitFieldReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/DynamicByteArray.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/IntegerReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcSerde.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReader.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthByteReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReader.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RunLengthIntegerReaderV2.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcInputFormat.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/VectorizedOrcSerde.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/PhysicalOptimizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/physical/Vectorizer.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/AbstractOperatorDesc.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/MapWork.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFHex.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/JavaDataModel.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/QTestUtil.java
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/vector
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestVectorizedORCReader.java
* /hive/trunk/ql/src/test/queries/clientpositive/vectorization_short_regress.q
* /hive/trunk/ql/src/test/queries/clientpositive/vectorized_rcfile_columnar.q
* /hive/trunk/ql/src/test/results/clientpositive/add_part_exist.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_index.q.out
* /hive/trunk/ql/src/test/results/clientpositive/alter_rename_partition.q.out
* /hive/trunk/ql/src/test/results/clientpositive/describe_table_json.q.out
* /hive/trunk/ql/src/test/results/clientpositive/index_creation.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/plan_json.q.out
* 

[jira] [Commented] (HIVE-5035) [WebHCat] Hardening parameters for Windows

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782989#comment-13782989
 ] 

Hudson commented on HIVE-5035:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #187 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/187/])
HIVE-5035: [WebHCat] Hardening parameters for Windows (Daniel Dai via Thejas 
Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527835)
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/Server.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/StreamingDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonUtils.java


 [WebHCat] Hardening parameters for Windows
 --

 Key: HIVE-5035
 URL: https://issues.apache.org/jira/browse/HIVE-5035
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5035-1.patch, HIVE-5035-2.patch


 Everything passed to the pig/hive/hadoop command line will be quoted. That includes:
 mapreducejar:
 libjars
 arg
 define
 mapreducestream:
 cmdenv
 define
 arg
 pig
 arg
 execute
 hive
 arg
 define
 execute
 NO PRECOMMIT TESTS 





[jira] [Commented] (HIVE-5066) [WebHCat] Other code fixes for Windows

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13782991#comment-13782991
 ] 

Hudson commented on HIVE-5066:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #187 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/187/])
HIVE-5066: [WebHCat] Other code fixes for Windows (Daniel Dai via Thejas Nair) 
(thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1527840)
* /hive/trunk/hcatalog/webhcat/svr/src/main/config/webhcat-default.xml
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/AppConfig.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/ExecServiceImpl.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HcatDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/HiveDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/JarDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/PigDelegator.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSCleanup.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/HDFSStorage.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TempletonControllerJob.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/main/java/org/apache/hive/hcatalog/templeton/tool/TrivialExecService.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/TestServer.java
* 
/hive/trunk/hcatalog/webhcat/svr/src/test/java/org/apache/hive/hcatalog/templeton/tool/TestTempletonUtils.java


 [WebHCat] Other code fixes for Windows
 --

 Key: HIVE-5066
 URL: https://issues.apache.org/jira/browse/HIVE-5066
 Project: Hive
  Issue Type: Sub-task
  Components: HCatalog
Reporter: Daniel Dai
Assignee: Daniel Dai
 Fix For: 0.12.0

 Attachments: HIVE-5034-1.patch, HIVE-5066-2.patch, HIVE-5066-3.patch, 
 HIVE-5066-4.patch


 This is equivalent to HCATALOG-526, but updated to sync with latest trunk.
 NO PRECOMMIT TESTS





[jira] [Updated] (HIVE-5325) Implement statistics providing ORC writer and reader interfaces

2013-10-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5325:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Prasanth!

 Implement statistics providing ORC writer and reader interfaces
 ---

 Key: HIVE-5325
 URL: https://issues.apache.org/jira/browse/HIVE-5325
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: orcfile, statistics
 Fix For: 0.13.0

 Attachments: HIVE-5325.1.patch.txt, HIVE-5325.2.patch.txt, 
 HIVE-5325.3.patch.txt, HIVE-5325-java-only.1.patch.txt, 
 HIVE-5325-java-only.2.patch.txt, HIVE-5325-java-only.3.patch.txt


 HIVE-5324 adds new interfaces that can be implemented by the ORC reader/writer to 
 provide statistics. Writer-provided statistics are used to update 
 table/partition-level statistics in the metastore. Reader-provided statistics can 
 be used for reducer estimation, CBO, etc. in the absence of metastore 
 statistics.





[jira] [Resolved] (HIVE-4340) ORC should provide raw data size

2013-10-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-4340.


   Resolution: Fixed
Fix Version/s: 0.13.0

Both subtasks HIVE-5324 and HIVE-5325 are resolved. Marking this as resolved. 
Thanks, Kevin, for the initial patch! Thanks, Prasanth, for taking it to completion.

 ORC should provide raw data size
 

 Key: HIVE-4340
 URL: https://issues.apache.org/jira/browse/HIVE-4340
 Project: Hive
  Issue Type: Improvement
  Components: File Formats
Affects Versions: 0.11.0
Reporter: Kevin Wilfong
Assignee: Prasanth J
 Fix For: 0.13.0

 Attachments: HIVE-4340.1.patch.txt, HIVE-4340.2.patch.txt, 
 HIVE-4340.3.patch.txt, HIVE-4340.4.patch.txt, HIVE-4340-java-only.4.patch.txt


 ORC's SerDe currently does nothing, and hence does not calculate a raw data 
 size. WriterImpl, however, has enough information to provide one.
 WriterImpl should compute a raw data size for each row, aggregate these per 
 stripe and record the total in the stripe information, as RC currently does in 
 its key header, and allow the FileSinkOperator access to the size per row.
 FileSinkOperator should be able to get the raw data size from either the 
 SerDe or the RecordWriter, when the RecordWriter can provide it.
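A sketch of the per-stripe aggregation proposed above (hypothetical class and boundaries, not ORC's actual WriterImpl): accumulate a raw size per row, and on each stripe boundary record the running total in the stripe information.

```java
import java.util.ArrayList;
import java.util.List;

public class RawSizeTracker {
    private final long rowsPerStripe;     // simplified fixed-size stripe boundary
    private long rowsInStripe = 0;
    private long stripeRawSize = 0;
    private final List<Long> stripeSizes = new ArrayList<>();

    public RawSizeTracker(long rowsPerStripe) { this.rowsPerStripe = rowsPerStripe; }

    // Called once per row with that row's estimated raw size in bytes.
    public void addRow(long rawSize) {
        stripeRawSize += rawSize;
        if (++rowsInStripe == rowsPerStripe) flushStripe();
    }

    private void flushStripe() {
        stripeSizes.add(stripeRawSize);   // recorded in the stripe information
        stripeRawSize = 0;
        rowsInStripe = 0;
    }

    public List<Long> getStripeSizes() { return stripeSizes; }
}
```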





[jira] [Commented] (HIVE-5325) Implement statistics providing ORC writer and reader interfaces

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783022#comment-13783022
 ] 

Hudson commented on HIVE-5325:
--

FAILURE: Integrated in Hive-trunk-h0.21 #2370 (See 
[https://builds.apache.org/job/Hive-trunk-h0.21/2370/])
HIVE-5325 : Implement statistics providing ORC writer and reader interfaces 
(Prasanth J via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528108)
* 
/hive/trunk/ql/src/gen/protobuf/gen-java/org/apache/hadoop/hive/ql/io/orc/OrcProto.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/BinaryColumnStatistics.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ReaderImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringColumnStatistics.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/JavaDataModel.java
* /hive/trunk/ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcNullOptimization.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcSerDeStats.java
* /hive/trunk/ql/src/test/resources/orc-file-dump-dictionary-threshold.out
* /hive/trunk/ql/src/test/resources/orc-file-dump.out


 Implement statistics providing ORC writer and reader interfaces
 ---

 Key: HIVE-5325
 URL: https://issues.apache.org/jira/browse/HIVE-5325
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: orcfile, statistics
 Fix For: 0.13.0

 Attachments: HIVE-5325.1.patch.txt, HIVE-5325.2.patch.txt, 
 HIVE-5325.3.patch.txt, HIVE-5325-java-only.1.patch.txt, 
 HIVE-5325-java-only.2.patch.txt, HIVE-5325-java-only.3.patch.txt


 HIVE-5324 adds new interfaces that can be implemented by the ORC reader/writer to 
 provide statistics. Writer-provided statistics are used to update 
 table/partition level statistics in the metastore. Reader-provided statistics can 
 be used for reducer estimation, CBO, etc. in the absence of metastore 
 statistics.
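The split between writer-side and reader-side statistics can be pictured with a toy interface. The names below are illustrative stand-ins, not the actual interfaces added by HIVE-5324.

```java
public class StatsDemo {
    // A writer that can report statistics lets the caller update
    // table/partition stats in the metastore without a second scan.
    public interface StatsProvidingWriter {
        long getRowCount();
        long getRawDataSize();
    }

    public static class CountingWriter implements StatsProvidingWriter {
        private long rows;
        private long bytes;

        public void write(String row) {
            rows++;
            bytes += row.length();  // crude raw-size stand-in
        }

        @Override public long getRowCount() { return rows; }
        @Override public long getRawDataSize() { return bytes; }
    }

    public static void main(String[] args) {
        CountingWriter w = new CountingWriter();
        w.write("abc");
        System.out.println(w.getRowCount() + " rows, " + w.getRawDataSize() + " bytes");
    }
}
```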





[jira] [Created] (HIVE-5410) Hive command line option --auxpath still does not work post HIVE-5363

2013-10-01 Thread Brock Noland (JIRA)
Brock Noland created HIVE-5410:
--

 Summary: Hive command line option --auxpath still does not work 
post HIVE-5363
 Key: HIVE-5410
 URL: https://issues.apache.org/jira/browse/HIVE-5410
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker
 Fix For: 0.12.0


In short, AUX_PARAM is set to:

{noformat}
$ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
file,file:/etc/passwd
{noformat}

which is invalid because file is not a real file.
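The transformation above can be reproduced outside the launcher script. The Java below is an illustrative translation of the sed expression, plus a hypothetical per-element rewrite for a plain colon-separated path list; neither is the actual bin/hive code nor the HIVE-5410 patch.

```java
public class AuxPathDemo {
    // Literal translation of sed 's/:/,file:\/\//g': every ':' becomes
    // ",file://", including the scheme colon of an existing file:// URI,
    // so the first list element degenerates to the bare word "file".
    public static String brokenRewrite(String paths) {
        return paths.replace(":", ",file://");
    }

    // Hypothetical fix sketch for plain paths: split on the separator
    // first, then prefix each element, so scheme colons are never touched.
    public static String fixedRewrite(String colonSeparatedPaths) {
        StringBuilder out = new StringBuilder();
        for (String p : colonSeparatedPaths.split(":")) {
            if (out.length() > 0) out.append(',');
            out.append("file://").append(p);
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(brokenRewrite("file:///etc/passwd"));
        System.out.println(fixedRewrite("/a.jar:/b.jar"));  // file:///a.jar,file:///b.jar
    }
}
```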





[jira] [Updated] (HIVE-5410) Hive command line option --auxpath still does not work post HIVE-5363

2013-10-01 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5410:
---

Status: Patch Available  (was: Open)

 Hive command line option --auxpath still does not work post HIVE-5363
 -

 Key: HIVE-5410
 URL: https://issues.apache.org/jira/browse/HIVE-5410
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker
 Fix For: 0.12.0

 Attachments: HIVE-5410.patch


 In short, AUX_PARAM is set to:
 {noformat}
 $ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
 file,file:/etc/passwd
 {noformat}
 which is invalid because file is not a real file.
 NO PRECOMMIT TESTS (since this is not tested)





[jira] [Commented] (HIVE-5410) Hive command line option --auxpath still does not work post HIVE-5363

2013-10-01 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783030#comment-13783030
 ] 

Brock Noland commented on HIVE-5410:


FYI [~thejas] as this is a blocker for 0.12.

 Hive command line option --auxpath still does not work post HIVE-5363
 -

 Key: HIVE-5410
 URL: https://issues.apache.org/jira/browse/HIVE-5410
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker
 Fix For: 0.12.0

 Attachments: HIVE-5410.patch


 In short, AUX_PARAM is set to:
 {noformat}
 $ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
 file,file:/etc/passwd
 {noformat}
 which is invalid because file is not a real file.
 NO PRECOMMIT TESTS (since this is not tested)





[jira] [Updated] (HIVE-5410) Hive command line option --auxpath still does not work post HIVE-5363

2013-10-01 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5410:
---

Description: 
In short, AUX_PARAM is set to:

{noformat}
$ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
file,file:/etc/passwd
{noformat}

which is invalid because file is not a real file.

NO PRECOMMIT TESTS (since this is not tested)

  was:
In short, AUX_PARAM is set to:

{noformat}
$ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
file,file:/etc/passwd
{noformat}

which is invalid because file is not a real file.


 Hive command line option --auxpath still does not work post HIVE-5363
 -

 Key: HIVE-5410
 URL: https://issues.apache.org/jira/browse/HIVE-5410
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker
 Fix For: 0.12.0

 Attachments: HIVE-5410.patch


 In short, AUX_PARAM is set to:
 {noformat}
 $ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
 file,file:/etc/passwd
 {noformat}
 which is invalid because file is not a real file.
 NO PRECOMMIT TESTS (since this is not tested)





[jira] [Created] (HIVE-5411) Migrate serialization expression to Kryo

2013-10-01 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-5411:
--

 Summary: Migrate serialization expression to Kryo
 Key: HIVE-5411
 URL: https://issues.apache.org/jira/browse/HIVE-5411
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan








[jira] [Updated] (HIVE-5411) Migrate expression serialization to Kryo

2013-10-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5411:
---

Summary: Migrate expression serialization to Kryo  (was: Migrate 
serialization expression to Kryo)

 Migrate expression serialization to Kryo
 

 Key: HIVE-5411
 URL: https://issues.apache.org/jira/browse/HIVE-5411
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-5411.patch








[jira] [Updated] (HIVE-5411) Migrate serialization expression to Kryo

2013-10-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5411:
---

Attachment: HIVE-5411.patch

This patch is bigger than I initially thought because serializeExpression 
accepted ExprNodeFuncDesc as an argument, but it should really only accept 
ExprNodeGenericFuncDesc. Doing that resulted in quite a bit of refactoring in 
various classes. Additionally, this patch also includes:
* Since the serialized expression is stored in the job conf, the serialized version 
needs to be XML-safe, so we Base64 encode it.
* Kryo's Timestamp serialization is buggy (in the same way its Date serializer is); 
the patch includes a custom Serializer for it.
* It incorporates HIVE-5377
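The XML-safety point can be illustrated with stdlib pieces alone. In this sketch, java.io serialization stands in for Kryo (it is not Hive's actual Utilities code); the point is only the Base64 step that makes arbitrary bytes safe to store in an XML job configuration value.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Base64;

public class ConfSafeSerde {
    // Serialize and Base64-encode: the output contains only characters
    // that are safe to embed in an XML configuration value.
    public static String encode(Serializable obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return Base64.getEncoder().encodeToString(bos.toByteArray());
    }

    public static Object decode(String s) throws Exception {
        byte[] bytes = Base64.getDecoder().decode(s);
        try (ObjectInputStream ois =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static String roundTrip(String s) {
        try {
            return (String) decode(encode(s));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("col1 > 5"));
    }
}
```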

 Migrate serialization expression to Kryo
 

 Key: HIVE-5411
 URL: https://issues.apache.org/jira/browse/HIVE-5411
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-5411.patch








Review Request 14428: Migrate expression serialization to Kryo

2013-10-01 Thread Ashutosh Chauhan

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14428/
---

Review request for hive.


Bugs: HIVE-5411
https://issues.apache.org/jira/browse/HIVE-5411


Repository: hive


Description
---

Migrate expression serialization to Kryo


Diffs
-

  
trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HBaseStorageHandler.java
 1528113 
  
trunk/hbase-handler/src/java/org/apache/hadoop/hive/hbase/HiveHBaseTableInputFormat.java
 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/ExprNodeGenericFuncEvaluator.java
 1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java 1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexPredicateAnalyzer.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/index/IndexSearchCondition.java 
1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/index/compact/CompactIndexHandler.java
 1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/io/HiveInputFormat.java 1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgument.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/io/sarg/SearchArgumentImpl.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveStoragePredicateHandler.java
 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/index/RewriteCanApplyProcFactory.java
 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/pcr/PcrExprProcFactory.java
 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartExprEvalUtils.java
 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionExpressionForMetastore.java
 1528113 
  
trunk/ql/src/java/org/apache/hadoop/hive/ql/optimizer/ppr/PartitionPruner.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeDescUtils.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeGenericFuncDesc.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/plan/TableScanDesc.java 1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/ExprWalkerProcFactory.java 
1528113 
  trunk/ql/src/java/org/apache/hadoop/hive/ql/ppd/OpProcFactory.java 1528113 
  trunk/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java 1528113 
  
trunk/ql/src/test/org/apache/hadoop/hive/ql/io/sarg/TestSearchArgumentImpl.java 
1528113 
  trunk/ql/src/test/results/compiler/plan/case_sensitivity.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/cast1.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/groupby1.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/groupby2.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/groupby3.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/groupby4.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/groupby5.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/groupby6.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input1.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input2.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input20.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input3.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input4.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input6.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input8.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input9.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input_part1.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input_testxpath.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/input_testxpath2.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/join2.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/join4.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/join5.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/join6.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/join7.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/join8.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample1.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample2.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample3.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample4.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample5.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample6.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/sample7.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/subq.q.xml 1528113 
  trunk/ql/src/test/results/compiler/plan/udf1.q.xml 1528113 
  

[jira] [Commented] (HIVE-5411) Migrate expression serialization to Kryo

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783051#comment-13783051
 ] 

Ashutosh Chauhan commented on HIVE-5411:


RB entry : https://reviews.apache.org/r/14428/

 Migrate expression serialization to Kryo
 

 Key: HIVE-5411
 URL: https://issues.apache.org/jira/browse/HIVE-5411
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan
Assignee: Ashutosh Chauhan
 Attachments: HIVE-5411.patch








[jira] [Updated] (HIVE-5253) Create component to compile and jar dynamic code

2013-10-01 Thread Edward Capriolo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edward Capriolo updated HIVE-5253:
--

Attachment: HIVE-5253.11.patch.txt

 Create component to compile and jar dynamic code
 

 Key: HIVE-5253
 URL: https://issues.apache.org/jira/browse/HIVE-5253
 Project: Hive
  Issue Type: Sub-task
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-5253.10.patch.txt, HIVE-5253.11.patch.txt, 
 HIVE-5253.1.patch.txt, HIVE-5253.3.patch.txt, HIVE-5253.3.patch.txt, 
 HIVE-5253.3.patch.txt, HIVE-5253.8.patch.txt, HIVE-5253.9.patch.txt, 
 HIVE-5253.patch.txt








[jira] [Commented] (HIVE-5114) add a target to run tests without rebuilding them

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783062#comment-13783062
 ] 

Ashutosh Chauhan commented on HIVE-5114:


+1 Useful changes to get tests to run faster.

 add a target to run tests without rebuilding them
 -

 Key: HIVE-5114
 URL: https://issues.apache.org/jira/browse/HIVE-5114
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-5114.2.patch, HIVE-5114.D12399.1.patch


 it is sometimes annoying that each ant test ... cleans and rebuilds the 
 tests. It should be relatively easy to add a testonly target that would 
 just run the test(s) on the existing build





[jira] [Created] (HIVE-5412) HivePreparedStatement.setDate not implemented

2013-10-01 Thread Alan Gates (JIRA)
Alan Gates created HIVE-5412:


 Summary: HivePreparedStatement.setDate not implemented
 Key: HIVE-5412
 URL: https://issues.apache.org/jira/browse/HIVE-5412
 Project: Hive
  Issue Type: Bug
  Components: JDBC
Affects Versions: 0.12.0
Reporter: Alan Gates
 Fix For: 0.13.0


The DATE type was added in Hive 0.12, but the HivePreparedStatement.setDate 
method was not implemented.





[jira] [Comment Edited] (HIVE-5394) ObjectInspectorConverters.getConvertedOI() does not return the correct object inspector for primitive type.

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783122#comment-13783122
 ] 

Ashutosh Chauhan edited comment on HIVE-5394 at 10/1/13 5:00 PM:
-

Patch is failing to compile test classes:
{code}
compile-test:
 [echo] Project: serde
[javac] Compiling 40 source files to hive/build/serde/test/classes
[javac] 
hive/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/TestObjectInspectorConverters.java:191:
 cannot find symbol
[javac] symbol  : method 
getConvertedOI(org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector,org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector,boolean)
[javac] location: class 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters
[javac] ObjectInspectorConverters.getConvertedOI(varchar10OI, 
varchar5OI, true);
[javac]  ^
{code}


was (Author: ashutoshc):
Patch is failing to compile test classes:
{java}
compile-test:
 [echo] Project: serde
[javac] Compiling 40 source files to hive/build/serde/test/classes
[javac] 
hive/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/TestObjectInspectorConverters.java:191:
 cannot find symbol
[javac] symbol  : method 
getConvertedOI(org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector,org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector,boolean)
[javac] location: class 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters
[javac] ObjectInspectorConverters.getConvertedOI(varchar10OI, 
varchar5OI, true);
[javac]  ^
{java}

 ObjectInspectorConverters.getConvertedOI() does not return the correct object 
 inspector for primitive type.
 ---

 Key: HIVE-5394
 URL: https://issues.apache.org/jira/browse/HIVE-5394
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Blocker
 Attachments: HIVE-5394.12.branch.txt, HIVE-5394.1.patch


 The code currently returns the settable type of the input primitive object 
 inspector, whereas it should return the settable type of the output object inspector
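A minimal model of that contract, with enum stand-ins rather than Hive's actual ObjectInspector API (the names are illustrative):

```java
public class ConvertedOIDemo {
    public enum PrimitiveCategory { INT, VARCHAR5, VARCHAR10 }

    // Buggy shape: echoing the input type means a varchar(10) -> varchar(5)
    // conversion hands back a settable varchar(10) inspector.
    public static PrimitiveCategory getConvertedOIBuggy(PrimitiveCategory input,
                                                        PrimitiveCategory output) {
        return input;
    }

    // Fixed shape: the converted (settable) inspector matches the target type.
    public static PrimitiveCategory getConvertedOIFixed(PrimitiveCategory input,
                                                        PrimitiveCategory output) {
        return output;
    }

    public static void main(String[] args) {
        System.out.println(
            getConvertedOIFixed(PrimitiveCategory.VARCHAR10, PrimitiveCategory.VARCHAR5));
    }
}
```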





[jira] [Updated] (HIVE-5394) ObjectInspectorConverters.getConvertedOI() does not return the correct object inspector for primitive type.

2013-10-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5394:
---

Status: Open  (was: Patch Available)

Patch is failing to compile test classes:
{java}
compile-test:
 [echo] Project: serde
[javac] Compiling 40 source files to hive/build/serde/test/classes
[javac] 
hive/serde/src/test/org/apache/hadoop/hive/serde2/objectinspector/TestObjectInspectorConverters.java:191:
 cannot find symbol
[javac] symbol  : method 
getConvertedOI(org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector,org.apache.hadoop.hive.serde2.objectinspector.PrimitiveObjectInspector,boolean)
[javac] location: class 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorConverters
[javac] ObjectInspectorConverters.getConvertedOI(varchar10OI, 
varchar5OI, true);
[javac]  ^
{java}

 ObjectInspectorConverters.getConvertedOI() does not return the correct object 
 inspector for primitive type.
 ---

 Key: HIVE-5394
 URL: https://issues.apache.org/jira/browse/HIVE-5394
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Blocker
 Attachments: HIVE-5394.12.branch.txt, HIVE-5394.1.patch


 The code currently returns the settable type of the input primitive object 
 inspector, whereas it should return the settable type of the output object inspector





[jira] [Commented] (HIVE-5394) ObjectInspectorConverters.getConvertedOI() does not return the correct object inspector for primitive type.

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783127#comment-13783127
 ] 

Ashutosh Chauhan commented on HIVE-5394:


This is for trunk patch.

 ObjectInspectorConverters.getConvertedOI() does not return the correct object 
 inspector for primitive type.
 ---

 Key: HIVE-5394
 URL: https://issues.apache.org/jira/browse/HIVE-5394
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Blocker
 Attachments: HIVE-5394.12.branch.txt, HIVE-5394.1.patch


 The code currently returns the settable type of the input primitive object 
 inspector, whereas it should return the settable type of the output object inspector





[jira] [Updated] (HIVE-5394) ObjectInspectorConverters.getConvertedOI() does not return the correct object inspector for primitive type.

2013-10-01 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5394:


Attachment: HIVE-5394.2.patch.txt

 ObjectInspectorConverters.getConvertedOI() does not return the correct object 
 inspector for primitive type.
 ---

 Key: HIVE-5394
 URL: https://issues.apache.org/jira/browse/HIVE-5394
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Blocker
 Attachments: HIVE-5394.12.branch.txt, HIVE-5394.1.patch, 
 HIVE-5394.2.patch.txt


 The code currently returns the settable type of the input primitive object 
 inspector, whereas it should return the settable type of the output object inspector





[jira] [Updated] (HIVE-5394) ObjectInspectorConverters.getConvertedOI() does not return the correct object inspector for primitive type.

2013-10-01 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5394:


Status: Patch Available  (was: Open)

 ObjectInspectorConverters.getConvertedOI() does not return the correct object 
 inspector for primitive type.
 ---

 Key: HIVE-5394
 URL: https://issues.apache.org/jira/browse/HIVE-5394
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
Priority: Blocker
 Attachments: HIVE-5394.12.branch.txt, HIVE-5394.1.patch, 
 HIVE-5394.2.patch.txt


 The code currently returns the settable type of the input primitive object 
 inspector, whereas it should return the settable type of the output object inspector





[jira] [Commented] (HIVE-5253) Create component to compile and jar dynamic code

2013-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783158#comment-13783158
 ] 

Hive QA commented on HIVE-5253:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606118/HIVE-5253.11.patch.txt

{color:green}SUCCESS:{color} +1 4078 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/982/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/982/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

 Create component to compile and jar dynamic code
 

 Key: HIVE-5253
 URL: https://issues.apache.org/jira/browse/HIVE-5253
 Project: Hive
  Issue Type: Sub-task
Reporter: Edward Capriolo
Assignee: Edward Capriolo
 Attachments: HIVE-5253.10.patch.txt, HIVE-5253.11.patch.txt, 
 HIVE-5253.1.patch.txt, HIVE-5253.3.patch.txt, HIVE-5253.3.patch.txt, 
 HIVE-5253.3.patch.txt, HIVE-5253.8.patch.txt, HIVE-5253.9.patch.txt, 
 HIVE-5253.patch.txt








[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783167#comment-13783167
 ] 

shanyu zhao commented on HIVE-5405:
---

[~brocknoland] Yes, this patch fixed my issues when I set 
hive.plan.serialization.format to javaXML. I believe this is the real cause of 
HIVE-5068, which was fixed by HIVE-5263 because HIVE-5263 uses the Kryo serializer 
instead of javaXML.

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
   <name>hive.plan.serialization.format</name>
   <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set as the serialization 
 format.
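A minimal sketch of the fix the description asks for, using a hypothetical Token class as a stand-in for org.antlr.runtime.CommonToken (this is not the HIVE-5405 patch itself): a class without a no-arg constructor breaks XMLEncoder with InstantiationException unless a PersistenceDelegate tells the encoder which constructor arguments to record.

```java
import java.beans.DefaultPersistenceDelegate;
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

public class DelegateDemo {
    // Stand-in for CommonToken: immutable, no no-arg constructor.
    public static class Token {
        private final int type;
        private final String text;
        public Token(int type, String text) { this.type = type; this.text = text; }
        public int getType() { return type; }
        public String getText() { return text; }
    }

    public static Token roundTrip(Token t) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (XMLEncoder enc = new XMLEncoder(bos)) {
            // Reconstruct Token via new Token(getType(), getText()).
            enc.setPersistenceDelegate(Token.class,
                    new DefaultPersistenceDelegate(new String[] {"type", "text"}));
            enc.writeObject(t);
        }
        try (XMLDecoder dec = new XMLDecoder(new ByteArrayInputStream(bos.toByteArray()))) {
            return (Token) dec.readObject();
        }
    }

    public static void main(String[] args) {
        Token back = roundTrip(new Token(7, "select"));
        System.out.println(back.getType() + " " + back.getText());
    }
}
```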





[jira] [Commented] (HIVE-4160) Vectorized Query Execution in Hive

2013-10-01 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783171#comment-13783171
 ] 

Jitendra Nath Pandey commented on HIVE-4160:


Vectorization work has been committed to trunk. Going forward, all the 
vectorization work will happen on trunk and vectorization branch will be 
obsolete.

 Vectorized Query Execution in Hive
 --

 Key: HIVE-4160
 URL: https://issues.apache.org/jira/browse/HIVE-4160
 Project: Hive
  Issue Type: New Feature
Reporter: Jitendra Nath Pandey
Assignee: Jitendra Nath Pandey
 Attachments: Hive-Vectorized-Query-Execution-Design.docx, 
 Hive-Vectorized-Query-Execution-Design-rev10.docx, 
 Hive-Vectorized-Query-Execution-Design-rev10.docx, 
 Hive-Vectorized-Query-Execution-Design-rev10.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev11.docx, 
 Hive-Vectorized-Query-Execution-Design-rev11.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev2.docx, 
 Hive-Vectorized-Query-Execution-Design-rev3.docx, 
 Hive-Vectorized-Query-Execution-Design-rev3.docx, 
 Hive-Vectorized-Query-Execution-Design-rev3.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev4.docx, 
 Hive-Vectorized-Query-Execution-Design-rev4.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev5.docx, 
 Hive-Vectorized-Query-Execution-Design-rev5.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev6.docx, 
 Hive-Vectorized-Query-Execution-Design-rev6.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev7.docx, 
 Hive-Vectorized-Query-Execution-Design-rev8.docx, 
 Hive-Vectorized-Query-Execution-Design-rev8.pdf, 
 Hive-Vectorized-Query-Execution-Design-rev9.docx, 
 Hive-Vectorized-Query-Execution-Design-rev9.pdf


 The Hive query execution engine currently processes one row at a time. A 
 single row of data goes through all the operators before the next row can be 
 processed. This mode of processing is very inefficient in terms of CPU usage. 
 Research has demonstrated that this yields very low instructions per cycle 
 [MonetDB X100]. Also currently Hive heavily relies on lazy deserialization 
 and data columns go through a layer of object inspectors that identify column 
 type, deserialize data and determine appropriate expression routines in the 
 inner loop. These layers of virtual method calls further slow down the 
 processing. 
 This work will add support for vectorized query execution to Hive, where, 
 instead of individual rows, batches of about a thousand rows at a time are 
 processed. Each column in the batch is represented as a vector of a primitive 
 data type. The inner loop of execution scans these vectors very fast, 
 avoiding method calls, deserialization, unnecessary if-then-else, etc. This 
 substantially reduces CPU time used, and gives excellent instructions per 
 cycle (i.e. improved processor pipeline utilization). See the attached design 
 specification for more details.
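The contrast the description draws can be sketched directly. The batch size and method names below are illustrative only, not Hive's actual VectorizedRowBatch API: one expression is evaluated over a whole column vector in a tight primitive loop, with no per-row virtual calls, boxing, or object-inspector hops.

```java
public class VectorDemo {
    static final int BATCH_SIZE = 1024;  // roughly the "about a thousand rows"

    // col3 = col1 + col2 over the whole batch in one pass.
    public static void addLongColumns(long[] col1, long[] col2, long[] out, int n) {
        for (int i = 0; i < n; i++) {
            out[i] = col1[i] + col2[i];
        }
    }

    public static void main(String[] args) {
        long[] a = new long[BATCH_SIZE];
        long[] b = new long[BATCH_SIZE];
        long[] c = new long[BATCH_SIZE];
        for (int i = 0; i < BATCH_SIZE; i++) {
            a[i] = i;
            b[i] = 2L * i;
        }
        addLongColumns(a, b, c, BATCH_SIZE);
        System.out.println(c[10]);  // 30
    }
}
```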





[jira] [Updated] (HIVE-4501) HS2 memory leak - FileSystem objects in FileSystem.CACHE

2013-10-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4501:


Status: Open  (was: Patch Available)

 HS2 memory leak - FileSystem objects in FileSystem.CACHE
 

 Key: HIVE-4501
 URL: https://issues.apache.org/jira/browse/HIVE-4501
 Project: Hive
  Issue Type: Bug
  Components: HiveServer2
Affects Versions: 0.11.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair
Priority: Critical
 Attachments: HIVE-4501.1.patch, HIVE-4501.1.patch, HIVE-4501.1.patch, 
 HIVE-4501.trunk.patch


 org.apache.hadoop.fs.FileSystem objects are getting accumulated in 
 FileSystem.CACHE, with HS2 in unsecure mode.
 As a workaround, it is possible to set fs.hdfs.impl.disable.cache and 
 fs.file.impl.disable.cache to true.
 Users should not have to bother with this extra configuration.
 Alternatively, disable impersonation by setting hive.server2.enable.doAs to 
 false.
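For reference, the two cache-disabling properties mentioned above would look like this in hive-site.xml (a sketch of the workaround, not a recommended permanent setting):

```xml
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
<property>
  <name>fs.file.impl.disable.cache</name>
  <value>true</value>
</property>
```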





[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783195#comment-13783195
 ] 

Brock Noland commented on HIVE-5405:


Right, in HIVE-5263 we didn't perform this update, so 
hive.plan.serialization.format=javaXML won't actually work. One more question: 
I am just curious why you are setting hive.plan.serialization.format to 
javaXML? It's useful in our tests, but in general it should be an unperformant 
configuration choice.

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
 <name>hive.plan.serialization.format</name>
 <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.
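
 One way to avoid the InstantiationException above is to register a 
 PersistenceDelegate that tells XMLEncoder which constructor properties to use. 
 The sketch below uses a hypothetical stand-in Token class rather than the real 
 org.antlr.runtime.CommonToken, so it illustrates the mechanism only, not the 
 actual patch:

 ```java
 import java.beans.DefaultPersistenceDelegate;
 import java.beans.XMLEncoder;
 import java.io.ByteArrayOutputStream;

 public class XmlEncodeDemo {
     // Hypothetical stand-in for org.antlr.runtime.CommonToken: it has no
     // no-arg constructor, which is what makes default XMLEncoder
     // persistence fail with InstantiationException.
     public static class Token {
         private final int type;
         private final String text;
         public Token(int type, String text) { this.type = type; this.text = text; }
         public int getType() { return type; }
         public String getText() { return text; }
     }

     public static void main(String[] args) {
         ByteArrayOutputStream out = new ByteArrayOutputStream();
         XMLEncoder enc = new XMLEncoder(out);
         // Tell the encoder to persist Token via its (type, text) constructor
         // instead of looking for a no-arg constructor.
         enc.setPersistenceDelegate(Token.class,
                 new DefaultPersistenceDelegate(new String[] {"type", "text"}));
         enc.writeObject(new Token(42, "hello"));
         enc.close();
         System.out.println(out.toString().contains("hello"));
     }
 }
 ```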



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783205#comment-13783205
 ] 

shanyu zhao commented on HIVE-5405:
---

I understand that javaXML is not a preferred configuration choice. But since we 
are providing a configuration option instead of removing javaXML support in 
HIVE-1511, we need to make sure it actually works.

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
 <name>hive.plan.serialization.format</name>
 <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783206#comment-13783206
 ] 

Brock Noland commented on HIVE-5405:


Hey,

Sorry I wasn't clear, I am in favor of this patch (I am +1 on the patch) and 
feel we should have done this in HIVE-5263 (Right, in HIVE-5263 we didn't 
perform this update, so hive.plan.serialization.format=javaXML won't actually 
work.), but I want to understand why you want to do this so I can understand 
in what scenarios javaXML is preferable.

Looking forward to your response.

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
 <name>hive.plan.serialization.format</name>
 <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread shanyu zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783212#comment-13783212
 ] 

shanyu zhao commented on HIVE-5405:
---

Oh sorry, I couldn't answer your question. I was just chasing a bug similar to 
HIVE-5068 before HIVE-5263 and concluded CommonToken was the problem. That's 
why I wrote this patch. I don't actually know when javaXML is preferable in 
practice.

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 property
 namehive.plan.serialization.format/name
 valuejavaXML/value
 /property
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783221#comment-13783221
 ] 

Brock Noland commented on HIVE-5405:


+1

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at 
 org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.
 java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAcces
 sorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
 <name>hive.plan.serialization.format</name>
 <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5325) Implement statistics providing ORC writer and reader interfaces

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783251#comment-13783251
 ] 

Hudson commented on HIVE-5325:
--

FAILURE: Integrated in Hive-trunk-hadoop2-ptest #122 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2-ptest/122/])
HIVE-5325 : Implement statistics providing ORC writer and reader interfaces 
(Prasanth J via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528108)
* 
/hive/trunk/ql/src/gen/protobuf/gen-java/org/apache/hadoop/hive/ql/io/orc/OrcProto.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/BinaryColumnStatistics.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ReaderImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringColumnStatistics.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/JavaDataModel.java
* /hive/trunk/ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcNullOptimization.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcSerDeStats.java
* /hive/trunk/ql/src/test/resources/orc-file-dump-dictionary-threshold.out
* /hive/trunk/ql/src/test/resources/orc-file-dump.out


 Implement statistics providing ORC writer and reader interfaces
 ---

 Key: HIVE-5325
 URL: https://issues.apache.org/jira/browse/HIVE-5325
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: orcfile, statistics
 Fix For: 0.13.0

 Attachments: HIVE-5325.1.patch.txt, HIVE-5325.2.patch.txt, 
 HIVE-5325.3.patch.txt, HIVE-5325-java-only.1.patch.txt, 
 HIVE-5325-java-only.2.patch.txt, HIVE-5325-java-only.3.patch.txt


 HIVE-5324 adds new interfaces that can be implemented by ORC reader/writer to 
 provide statistics. Writer provided statistics is used to update 
 table/partition level statistics in metastore. Reader provided statistics can 
 be used for reducer estimation, CBO etc. in the absence of metastore 
 statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-01 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5402:
---

Status: Open  (was: Patch Available)

Fair point - SBAP initially did not have any tests because it was difficult to 
test without multiple users when running from the metastore. Now that it's been 
made to run from the client side, I should add some tests to it. I'll update it 
with some.

 StorageBasedAuthorizationProvider is not correctly able to determine that it 
 is running from client-side
 

 Key: HIVE-5402
 URL: https://issues.apache.org/jira/browse/HIVE-5402
 Project: Hive
  Issue Type: Bug
  Components: Authorization
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Attachments: HIVE-5402.patch


 HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
 that it could be run from the client side as well.
 However, there is a bug that causes SBAP to incorrectly conclude that it's 
 running from the metastore side when it's actually running from the 
 client side, which makes it throw an IllegalStateException claiming the 
 warehouse variable isn't set.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5410) Hive command line option --auxpath still does not work post HIVE-5363

2013-10-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783273#comment-13783273
 ] 

Thejas M Nair commented on HIVE-5410:
-

+1

 Hive command line option --auxpath still does not work post HIVE-5363
 -

 Key: HIVE-5410
 URL: https://issues.apache.org/jira/browse/HIVE-5410
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker
 Fix For: 0.12.0

 Attachments: HIVE-5410.patch


 In short, AUX_PARAM is set to:
 {noformat}
 $ echo file:///etc/passwd | sed 's/:/,file:\/\//g'
 file,file:/etc/passwd
 {noformat}
 which is invalid because "file" is not a real file.
 NO PRECOMMIT TESTS (since this is not tested)
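
 The mangling can be reproduced in the shell, and one possible fix sketched 
 (the corrected sed is only an illustration that assumes a plain 
 colon-separated list of local paths as input, not the actual patch):

 ```shell
 # Reproduce the bug from the description: a value that already carries a
 # file:// scheme gets its own scheme colon rewritten as a separator.
 echo 'file:///etc/passwd' | sed 's/:/,file:\/\//g'

 # A possible fix (illustrative only): treat the input as a plain
 # colon-separated path list and prefix every element with file://.
 echo '/a.jar:/b.jar' | sed 's/:/,/g; s/^/file:\/\//; s/,/,file:\/\//g'
 # -> file:///a.jar,file:///b.jar
 ```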



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5362) TestHCatHBaseInputFormat has a bug which will not allow it to run on JDK7 and RHEL 6

2013-10-01 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5362:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

(jira was still marked unresolved though it is resolved; marking as fixed)

 TestHCatHBaseInputFormat has a bug which will not allow it to run on JDK7 and 
 RHEL 6
 

 Key: HIVE-5362
 URL: https://issues.apache.org/jira/browse/HIVE-5362
 Project: Hive
  Issue Type: Bug
  Components: HCatalog, StorageHandler
Affects Versions: 0.12.0
Reporter: Viraj Bhat
Assignee: Viraj Bhat
 Fix For: 0.12.0

 Attachments: HIVE-5362.patch, HIVE5362.patch


  Testcases TestHBaseInputFormatProjectionReadMR and 
  TestHBaseTableProjectionReadMR use different Map classes but check the same 
  static variable, so if the order in which the tests are run changes, the 
  testcase fails. I experienced this on RHEL 6.4 with JDK 7 and with no 
  other combination. This currently succeeds on all build machines because 
  the order of the tests is deterministic.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5382) Allow strings represented as exponential notation to be typecasted to int/smallint/bigint/tinyint

2013-10-01 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5382:


Status: Patch Available  (was: Open)

 Allow strings represented as exponential notation to be typecasted to 
 int/smallint/bigint/tinyint
 -

 Key: HIVE-5382
 URL: https://issues.apache.org/jira/browse/HIVE-5382
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-5382.1.patch


 Follow up jira for HIVE-5352



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5382) Allow strings represented as exponential notation to be typecasted to int/smallint/bigint/tinyint

2013-10-01 Thread Hari Sankar Sivarama Subramaniyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sankar Sivarama Subramaniyan updated HIVE-5382:


Attachment: HIVE-5382.1.patch

Allow exponential notation when the final result is within bounds.

 Allow strings represented as exponential notation to be typecasted to 
 int/smallint/bigint/tinyint
 -

 Key: HIVE-5382
 URL: https://issues.apache.org/jira/browse/HIVE-5382
 Project: Hive
  Issue Type: Bug
Reporter: Hari Sankar Sivarama Subramaniyan
Assignee: Hari Sankar Sivarama Subramaniyan
 Attachments: HIVE-5382.1.patch


 Follow up jira for HIVE-5352



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5400) Allow admins to disable compile and other commands

2013-10-01 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5400:
---

Attachment: HIVE-5400.patch

Attached is a patch which implements this for HS2 only. I think that makes 
sense as Admins will not be able to stop Hive CLI users from bypassing this 
mechanism anyway.

 Allow admins to disable compile and other commands
 --

 Key: HIVE-5400
 URL: https://issues.apache.org/jira/browse/HIVE-5400
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Edward Capriolo
 Attachments: HIVE-5400.patch


 From here: 
 https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13782220&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13782220
  I think we should afford admins who want to disable this functionality the 
 ability to do so. Since such admins might want to disable other commands such 
 as add or dfs, it wouldn't be much trouble to allow them to do this as well. 
 For example we could have a configuration option hive.available.commands 
 (or similar) which specified add,set,delete,reset, etc by default. Then check 
 this value in CommandProcessorFactory. It would probably make sense to add 
 this property to the restrict list.
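
 A minimal sketch of the proposed check (the value format of 
 hive.available.commands and the CommandGate class name are illustrative, not 
 the actual patch):

 ```java
 import java.util.HashSet;
 import java.util.Locale;
 import java.util.Set;

 // Hypothetical sketch: a comma-separated "hive.available.commands"-style
 // value parsed into a set, consulted before a command processor is
 // handed out (as CommandProcessorFactory would).
 public class CommandGate {
     private final Set<String> available;

     public CommandGate(String configValue) {
         available = new HashSet<>();
         for (String c : configValue.split(",")) {
             available.add(c.trim().toLowerCase(Locale.ROOT));
         }
     }

     public boolean isAllowed(String command) {
         return available.contains(command.toLowerCase(Locale.ROOT));
     }

     public static void main(String[] args) {
         CommandGate gate = new CommandGate("add,set,delete,reset");
         System.out.println(gate.isAllowed("set"));      // allowed by config
         System.out.println(gate.isAllowed("compile"));  // not in the list
     }
 }
 ```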



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5400) Allow admins to disable compile and other commands

2013-10-01 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5400:
---

Status: Patch Available  (was: Open)

Marking PA for HiveQA.

 Allow admins to disable compile and other commands
 --

 Key: HIVE-5400
 URL: https://issues.apache.org/jira/browse/HIVE-5400
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Edward Capriolo
 Attachments: HIVE-5400.patch


 From here: 
 https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13782220&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13782220
  I think we should afford admins who want to disable this functionality the 
 ability to do so. Since such admins might want to disable other commands such 
 as add or dfs, it wouldn't be much trouble to allow them to do this as well. 
 For example we could have a configuration option hive.available.commands 
 (or similar) which specified add,set,delete,reset, etc by default. Then check 
 this value in CommandProcessorFactory. It would probably make sense to add 
 this property to the restrict list.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: did you always have to log in to phabricator

2013-10-01 Thread Edward Capriolo
I do not know what to say on this, other than that we are stuck between a rock
and a hard place. I would say we should just stop using Phabricator
altogether, but then again we already have two tickets open with
ASF infrastructure that have not gone anywhere for months (moving our site to
a CMS, fixing the broken Confluence-to-wiki publishing), so moving the process
officially to some ASF Review Board type thing might have similar issues.

What we might have to do is host Phabricator ourselves, like we are hosting
our unit testing ourselves. I do not know what else to say on this.


On Sat, Sep 28, 2013 at 8:55 PM, Sean Busbey bus...@cloudera.com wrote:

 Bump. Any update on this?


 On Tue, Sep 17, 2013 at 12:41 PM, Edward Capriolo edlinuxg...@gmail.com
 wrote:

  I do not like this. It is inconvenient when using a mobile device, but
  more importantly it does not seem very transparent to our end users. For
  example, a user browsing jira may want to review the code only on
  review board (not yet attached to the issue); they should not be forced
  to sign up to help in the process.
 
  Would anyone from facebook care to chime in here? I think we all like
  Phabricator for the most part. Our docs suggest that Phabricator is our
  de-facto review system. As an ASF project I do not think requiring a
  login on some external service even to review a jira is correct.
 
 
  On Tue, Sep 17, 2013 at 12:27 PM, Xuefu Zhang xzh...@cloudera.com
 wrote:
 
   Yeah. I used to be able to view w/o login, but now I am not.
  
  
   On Tue, Sep 17, 2013 at 7:27 AM, Brock Noland br...@cloudera.com
  wrote:
  
Personally I prefer Review Board.
   
On Tue, Sep 17, 2013 at 8:31 AM, Edward Capriolo 
  edlinuxg...@gmail.com
wrote:
 I never remember having to log into Phabricator to view a patch. Has
 this changed recently? I believe that having to create an external
 account to view a patch in progress is not something we should be doing.
   
   
   
--
Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org
   
  
 



 --
 Sean



Re: did you always have to log in to phabricator

2013-10-01 Thread Sean Busbey
Well, the ASF Review Board does currently work. Our dev guide suggests that
either it or Phabricator is fine. Generally all of my own jira submissions
include an RB link, so I'm fairly confident it works (for the features that RB
generally has).

I presume something as strong as a statement that we're going to stop using
Phabricator as a project would require a PMC vote?

-Sean


On Tue, Oct 1, 2013 at 1:15 PM, Edward Capriolo edlinuxg...@gmail.comwrote:

 I do not know what to say on this, other than that we are stuck between a rock
 and a hard place. I would say we should just stop using Phabricator
 altogether, but then again we already have two tickets open with
 ASF infrastructure not going anywhere for months (moving our site to a CMS,
 fixing the broken Confluence-to-wiki publishing), so moving the process
 officially to some ASF Review Board type thing might have similar issues.

 What we might have to do is host Phabricator ourselves, like we are hosting
 our unit testing ourselves. I do not know what else to say on this.


 On Sat, Sep 28, 2013 at 8:55 PM, Sean Busbey bus...@cloudera.com wrote:

  Bump. Any update on this?
 
 
  On Tue, Sep 17, 2013 at 12:41 PM, Edward Capriolo edlinuxg...@gmail.com
  wrote:
 
   I do not like this. It is inconvenient when using a mobile device, but
   more importantly it does not seem very transparent to our end users. For
   example, a user browsing jira may want to review the code only on
   review board (not yet attached to the issue); they should not be forced
   to sign up to help in the process.
  
   Would anyone from facebook care to chime in here? I think we all like
   Phabricator for the most part. Our docs suggest that Phabricator is our
   de-facto review system. As an ASF project I do not think requiring a
   login on some external service even to review a jira is correct.
  
  
   On Tue, Sep 17, 2013 at 12:27 PM, Xuefu Zhang xzh...@cloudera.com
  wrote:
  
Yeah. I used to be able to view w/o login, but now I am not.
   
   
On Tue, Sep 17, 2013 at 7:27 AM, Brock Noland br...@cloudera.com
   wrote:
   
 Personally I prefer Review Board.

 On Tue, Sep 17, 2013 at 8:31 AM, Edward Capriolo 
   edlinuxg...@gmail.com
 wrote:
  I never remember having to log into Phabricator to view a patch. Has
  this changed recently? I believe that having to create an external
  account to view a patch in progress is not something we should be doing.



 --
 Apache MRUnit - Unit testing MapReduce - http://mrunit.apache.org

   
  
 
 
 
  --
  Sean
 




-- 
Sean


[jira] [Updated] (HIVE-5196) ThriftCLIService.java uses stderr to print the stack trace, it should use the logger instead.

2013-10-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5196:


Attachment: HIVE-5196.3.patch

HIVE-5196.3.patch - patch with a minor rebase for trunk. Can you check if it 
looks good?


 ThriftCLIService.java uses stderr to print the stack trace, it should use the 
 logger instead.
 -

 Key: HIVE-5196
 URL: https://issues.apache.org/jira/browse/HIVE-5196
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D13107.1.patch, HIVE-5196.3.patch, 
 HIVE-5196.D13107.1.patch, HIVE-5196.D13107.2.patch


 ThriftCLIService.java uses stderr to print the stack trace, it should use the 
 logger instead. Using e.printStackTrace is not suitable for production.
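
 The change amounts to routing the throwable through a logger rather than 
 stderr. A minimal sketch using java.util.logging (Hive itself uses a 
 different logging facade, so this only illustrates the pattern):

 ```java
 import java.util.logging.Level;
 import java.util.logging.Logger;

 public class LogInsteadOfStderr {
     private static final Logger LOG =
             Logger.getLogger(LogInsteadOfStderr.class.getName());

     public static void main(String[] args) {
         try {
             throw new IllegalStateException("boom");
         } catch (Exception e) {
             // Instead of e.printStackTrace(), which writes raw to stderr,
             // hand the throwable to the logger so the trace reaches the
             // configured log destination with a level and timestamp.
             LOG.log(Level.SEVERE, "Error processing request", e);
         }
         System.out.println("done");
     }
 }
 ```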



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5196) ThriftCLIService.java uses stderr to print the stack trace, it should use the logger instead.

2013-10-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5196:


Status: Patch Available  (was: Open)

 ThriftCLIService.java uses stderr to print the stack trace, it should use the 
 logger instead.
 -

 Key: HIVE-5196
 URL: https://issues.apache.org/jira/browse/HIVE-5196
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D13107.1.patch, HIVE-5196.3.patch, 
 HIVE-5196.D13107.1.patch, HIVE-5196.D13107.2.patch


 ThriftCLIService.java uses stderr to print the stack trace, it should use the 
 logger instead. Using e.printStackTrace is not suitable for production.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5196) ThriftCLIService.java uses stderr to print the stack trace, it should use the logger instead.

2013-10-01 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5196:


Status: Open  (was: Patch Available)

 ThriftCLIService.java uses stderr to print the stack trace, it should use the 
 logger instead.
 -

 Key: HIVE-5196
 URL: https://issues.apache.org/jira/browse/HIVE-5196
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: Vaibhav Gumashta
Assignee: Vaibhav Gumashta
 Fix For: 0.12.0

 Attachments: D13107.1.patch, HIVE-5196.3.patch, 
 HIVE-5196.D13107.1.patch, HIVE-5196.D13107.2.patch


 ThriftCLIService.java uses stderr to print the stack trace, it should use the 
 logger instead. Using e.printStackTrace is not suitable for production.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-01 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-5403:
-

Attachment: HIVE-5403.2.patch

Updated to address Gunther's comments.

 Move loading of filesystem, ugi, metastore client to hive session
 -

 Key: HIVE-5403
 URL: https://issues.apache.org/jira/browse/HIVE-5403
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch


 As part of HIVE-5184, the metastore connection and filesystem loading were 
 done as part of the Tez session so as to speed up query times while paying a 
 cost at startup. We can do this more generally in Hive so that it applies to 
 both the MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-01 Thread Vikram Dixit K (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vikram Dixit K updated HIVE-5403:
-

Status: Open  (was: Patch Available)

 Move loading of filesystem, ugi, metastore client to hive session
 -

 Key: HIVE-5403
 URL: https://issues.apache.org/jira/browse/HIVE-5403
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch


 As part of HIVE-5184, the metastore connection and filesystem loading were 
 done as part of the Tez session so as to speed up query times while paying a 
 cost at startup. We can do this more generally in Hive so that it applies to 
 both the MapReduce and Tez sides of things.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 14425: HIVE-5403: Move loading of filesystem, ugi, metastore client to hive session

2013-10-01 Thread Vikram Dixit Kumaraswamy

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14425/
---

(Updated Oct. 1, 2013, 9:04 p.m.)


Review request for hive and Gunther Hagleitner.


Changes
---

Addressed Gunther's comments.


Bugs: HIVE-5403
https://issues.apache.org/jira/browse/HIVE-5403


Repository: hive-git


Description
---

Move loading of filesystem, ugi, metastore client to hive session.


Diffs (updated)
-

  ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java 0491f8b 
  ql/src/java/org/apache/hadoop/hive/ql/session/SessionState.java f6b1491 

Diff: https://reviews.apache.org/r/14425/diff/


Testing
---

Does not affect any unit tests but all of them exercise this code path.


Thanks,

Vikram Dixit Kumaraswamy



[jira] [Updated] (HIVE-4957) Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-01 Thread Shreepadma Venugopalan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shreepadma Venugopalan updated HIVE-4957:
-

Attachment: HIVE-4957.2.patch

 Restrict number of bit vectors, to prevent out of Java heap memory
 --

 Key: HIVE-4957
 URL: https://issues.apache.org/jira/browse/HIVE-4957
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Brock Noland
Assignee: Shreepadma Venugopalan
 Attachments: HIVE-4957.1.patch, HIVE-4957.2.patch


 Normally, increasing the number of bit vectors will increase calculation 
 accuracy. Let's say
 {noformat}
 select compute_stats(a, 40) from test_hive;
 {noformat}
 generally gets better accuracy than
 {noformat}
 select compute_stats(a, 16) from test_hive;
 {noformat}
 But a larger number of bit vectors also makes the query run slower. Once the 
 number of bit vectors goes over 50, it won't help accuracy anymore, but it 
 still increases memory usage and can crash Hive if the number is too large. 
 Current Hive doesn't prevent users from using a ridiculously large number of 
 bit vectors in a 'compute_stats' query.
 One example
 {noformat}
 select compute_stats(a, 9) from column_eight_types;
 {noformat}
 crashes Hive.
 {noformat}
 2012-12-20 23:21:52,247 Stage-1 map = 0%,  reduce = 0%
 2012-12-20 23:22:11,315 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.29 
 sec
 MapReduce Total cumulative CPU time: 290 msec
 Ended Job = job_1354923204155_0777 with errors
 Error during job, obtaining debugging information...
 Job Tracking URL: 
 http://cs-10-20-81-171.cloud.cloudera.com:8088/proxy/application_1354923204155_0777/
 Examining task ID: task_1354923204155_0777_m_00 (and more) from job 
 job_1354923204155_0777
 Task with the most failures(4): 
 -
 Task ID:
   task_1354923204155_0777_m_00
 URL:
   
 http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1354923204155_0777&tipid=task_1354923204155_0777_m_00
 -
 Diagnostic Messages for this Task:
 Error: Java heap space
 {noformat}
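To make the proposed fix concrete, here is a minimal, hypothetical sketch of the kind of bounds check the attached patch aims for; the real constant and logic live in GenericUDAFComputeStats, and all names below are illustrative:

```java
public class BitVectorCheck {
    // Illustrative cap; HIVE-4957 proposes limiting bit vectors to 1024.
    static final int MAX_NUM_BIT_VECTORS = 1024;

    // Reject out-of-range values up front instead of letting the estimator
    // allocate memory proportional to the request and exhaust the heap.
    static int validateNumBitVectors(int requested) {
        if (requested < 1 || requested > MAX_NUM_BIT_VECTORS) {
            throw new IllegalArgumentException(
                "numBitVectors must be between 1 and " + MAX_NUM_BIT_VECTORS
                    + ", got " + requested);
        }
        return requested;
    }

    public static void main(String[] args) {
        // A reasonable request passes; a huge one fails fast at query setup.
        System.out.println(validateNumBitVectors(40));
    }
}
```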



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5302) PartitionPruner fails on Avro non-partitioned data

2013-10-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783359#comment-13783359
 ] 

Thejas M Nair commented on HIVE-5302:
-

Changing the priority to critical instead of blocker.


 PartitionPruner fails on Avro non-partitioned data
 --

 Key: HIVE-5302
 URL: https://issues.apache.org/jira/browse/HIVE-5302
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.11.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: avro
 Attachments: HIVE-5302.1-branch-0.12.patch.txt, 
 HIVE-5302.1.patch.txt, HIVE-5302.1.patch.txt


 While updating HIVE-3585 I found a test case that causes the failure in the 
 MetaStoreUtils partition retrieval from back in HIVE-4789.
 In this case, the failure is triggered when the partition pruner is handed a 
 non-partitioned table and has to construct a pseudo-partition.
 e.g.
 {code}
   INSERT OVERWRITE TABLE partitioned_table PARTITION(col) SELECT id, foo, col 
 FROM non_partitioned_table WHERE col = 9;
 {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 14250: HIVE-4957: Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-01 Thread Shreepadma Venugopalan


 On Sept. 20, 2013, 8:38 p.m., Carl Steinbach wrote:
  ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java,
   line 480
  https://reviews.apache.org/r/14250/diff/2/?file=354856#file354856line480
 
  Please use a static variable instead of repeating 1024:
  
  private static final int MAX_NUM_BIT_VECTORS = 1024;

1024 is not repeated within a class. However, we can repeat the constant 
declaration across classes instead.


 On Sept. 20, 2013, 8:38 p.m., Carl Steinbach wrote:
  ql/src/test/queries/clientnegative/compute_stats_long.q, line 6
  https://reviews.apache.org/r/14250/diff/2/?file=354857#file354857line6
 
  Why should it raise an error?

Added a longer comment.


- Shreepadma


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14250/#review26304
---


On Sept. 20, 2013, 8:02 p.m., Shreepadma Venugopalan wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14250/
 ---
 
 (Updated Sept. 20, 2013, 8:02 p.m.)
 
 
 Review request for hive and Brock Noland.
 
 
 Bugs: HIVE-4957
 https://issues.apache.org/jira/browse/HIVE-4957
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Restricts the number of bit vectors used by Flajolet-Martin distinct value 
 estimator to 1024.
 
 
 Diffs
 -
 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java
  63110bb 
   ql/src/test/queries/clientnegative/compute_stats_long.q PRE-CREATION 
   ql/src/test/results/clientnegative/compute_stats_long.q.out PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/14250/diff/
 
 
 Testing
 ---
 
 Adds a new negative test case.
 
 
 Thanks,
 
 Shreepadma Venugopalan
 




[jira] [Commented] (HIVE-4957) Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-01 Thread Shreepadma Venugopalan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783358#comment-13783358
 ] 

Shreepadma Venugopalan commented on HIVE-4957:
--

New patch addresses review comments.

 Restrict number of bit vectors, to prevent out of Java heap memory
 --

 Key: HIVE-4957
 URL: https://issues.apache.org/jira/browse/HIVE-4957
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.11.0
Reporter: Brock Noland
Assignee: Shreepadma Venugopalan
 Attachments: HIVE-4957.1.patch, HIVE-4957.2.patch


 Normally, increasing the number of bit vectors increases calculation accuracy. 
 For example,
 {noformat}
 select compute_stats(a, 40) from test_hive;
 {noformat}
 generally gets better accuracy than
 {noformat}
 select compute_stats(a, 16) from test_hive;
 {noformat}
 But a larger number of bit vectors also makes the query run slower. Once the 
 number of bit vectors exceeds 50, it no longer improves accuracy, but it still 
 increases memory usage and crashes Hive if the number is too large. Current Hive 
 doesn't prevent users from using a ridiculously large number of bit vectors in a 
 'compute_stats' query.
 One example
 {noformat}
 select compute_stats(a, 9) from column_eight_types;
 {noformat}
 crashes Hive.
 {noformat}
 2012-12-20 23:21:52,247 Stage-1 map = 0%,  reduce = 0%
 2012-12-20 23:22:11,315 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.29 
 sec
 MapReduce Total cumulative CPU time: 290 msec
 Ended Job = job_1354923204155_0777 with errors
 Error during job, obtaining debugging information...
 Job Tracking URL: 
 http://cs-10-20-81-171.cloud.cloudera.com:8088/proxy/application_1354923204155_0777/
 Examining task ID: task_1354923204155_0777_m_00 (and more) from job 
 job_1354923204155_0777
 Task with the most failures(4): 
 -
 Task ID:
   task_1354923204155_0777_m_00
 URL:
   
 http://0.0.0.0:8088/taskdetails.jsp?jobid=job_1354923204155_0777&tipid=task_1354923204155_0777_m_00
 -
 Diagnostic Messages for this Task:
 Error: Java heap space
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 14250: HIVE-4957: Restrict number of bit vectors, to prevent out of Java heap memory

2013-10-01 Thread Shreepadma Venugopalan


 On Sept. 21, 2013, 2:27 a.m., Carl Steinbach wrote:
  ql/src/test/results/clientnegative/compute_stats_long.q.out, line 29
  https://reviews.apache.org/r/14250/diff/2/?file=354858#file354858line29
 
  The error message isn't making it back to the user because it's getting 
  generated at runtime on the cluster. Is it possible to bounds check this 
  parameter at compile time instead?

Today, the UDF framework doesn't support validation of inputs at compile time. 
Given the current framework, this is the best we can do. 


- Shreepadma


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14250/#review26313
---


On Sept. 20, 2013, 8:02 p.m., Shreepadma Venugopalan wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 https://reviews.apache.org/r/14250/
 ---
 
 (Updated Sept. 20, 2013, 8:02 p.m.)
 
 
 Review request for hive and Brock Noland.
 
 
 Bugs: HIVE-4957
 https://issues.apache.org/jira/browse/HIVE-4957
 
 
 Repository: hive-git
 
 
 Description
 ---
 
 Restricts the number of bit vectors used by Flajolet-Martin distinct value 
 estimator to 1024.
 
 
 Diffs
 -
 
   
 ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDAFComputeStats.java
  63110bb 
   ql/src/test/queries/clientnegative/compute_stats_long.q PRE-CREATION 
   ql/src/test/results/clientnegative/compute_stats_long.q.out PRE-CREATION 
 
 Diff: https://reviews.apache.org/r/14250/diff/
 
 
 Testing
 ---
 
 Adds a new negative test case.
 
 
 Thanks,
 
 Shreepadma Venugopalan
 




[jira] [Commented] (HIVE-5325) Implement statistics providing ORC writer and reader interfaces

2013-10-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783363#comment-13783363
 ] 

Hudson commented on HIVE-5325:
--

SUCCESS: Integrated in Hive-trunk-hadoop1-ptest #188 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop1-ptest/188/])
HIVE-5325 : Implement statistics providing ORC writer and reader interfaces 
(Prasanth J via Ashutosh Chauhan) (hashutosh: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1528108)
* 
/hive/trunk/ql/src/gen/protobuf/gen-java/org/apache/hadoop/hive/ql/io/orc/OrcProto.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/BinaryColumnStatistics.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ColumnStatisticsImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/ReaderImpl.java
* 
/hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/StringColumnStatistics.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java
* /hive/trunk/ql/src/java/org/apache/hadoop/hive/ql/util/JavaDataModel.java
* /hive/trunk/ql/src/protobuf/org/apache/hadoop/hive/ql/io/orc/orc_proto.proto
* /hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcFile.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcNullOptimization.java
* 
/hive/trunk/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestOrcSerDeStats.java
* /hive/trunk/ql/src/test/resources/orc-file-dump-dictionary-threshold.out
* /hive/trunk/ql/src/test/resources/orc-file-dump.out


 Implement statistics providing ORC writer and reader interfaces
 ---

 Key: HIVE-5325
 URL: https://issues.apache.org/jira/browse/HIVE-5325
 Project: Hive
  Issue Type: New Feature
Affects Versions: 0.13.0
Reporter: Prasanth J
Assignee: Prasanth J
  Labels: orcfile, statistics
 Fix For: 0.13.0

 Attachments: HIVE-5325.1.patch.txt, HIVE-5325.2.patch.txt, 
 HIVE-5325.3.patch.txt, HIVE-5325-java-only.1.patch.txt, 
 HIVE-5325-java-only.2.patch.txt, HIVE-5325-java-only.3.patch.txt


 HIVE-5324 adds new interfaces that can be implemented by ORC reader/writer to 
 provide statistics. Writer provided statistics is used to update 
 table/partition level statistics in metastore. Reader provided statistics can 
 be used for reducer estimation, CBO etc. in the absence of metastore 
 statistics.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5400) Allow admins to disable compile and other commands

2013-10-01 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783365#comment-13783365
 ] 

Edward Capriolo commented on HIVE-5400:
---

[~brocknoland]

Let's slow down a second. I think patching in this support for only HS2 is 
short-sighted. I think we do want to bring this code all the way down to the CLI; 
even if a local-mode CLI can avoid this protection, completely skipping the 
local-mode code path is the wrong way. Also, I do not like the hard-coded values here:

{code}
String[] commands = {"set", "dfs", "add", "delete"};
{code}

We already have abstractions like Processors and a class that acts as a 
switchboard. I think they should have a way of describing what types of 
commands they provide (an enum, possibly), and then let the switchboard make 
the choice.

Let's come up with a clean design that makes sense in the long run and is 
manageable, not just something we hack in.
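A rough sketch of the enum-based switchboard idea; all names here are hypothetical (the real dispatch lives in CommandProcessorFactory), this is only meant to show the shape of the design:

```java
import java.util.EnumSet;
import java.util.Locale;

public class CommandGate {
    // Hypothetical command taxonomy; each Processor would declare which
    // type(s) it serves instead of being matched by raw strings.
    enum CommandType { SET, DFS, ADD, DELETE, RESET, COMPILE }

    private final EnumSet<CommandType> enabled;

    CommandGate(EnumSet<CommandType> enabled) {
        this.enabled = enabled;
    }

    // The switchboard decides from admin configuration, not from
    // hard-coded strings scattered across classes.
    boolean isAllowed(String command) {
        try {
            return enabled.contains(CommandType.valueOf(command.toUpperCase(Locale.ROOT)));
        } catch (IllegalArgumentException e) {
            return false; // unknown commands are rejected
        }
    }

    public static void main(String[] args) {
        CommandGate gate = new CommandGate(EnumSet.of(CommandType.SET, CommandType.DFS));
        System.out.println(gate.isAllowed("set"));     // enabled by the admin
        System.out.println(gate.isAllowed("compile")); // disabled by the admin
    }
}
```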

 Allow admins to disable compile and other commands
 --

 Key: HIVE-5400
 URL: https://issues.apache.org/jira/browse/HIVE-5400
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Edward Capriolo
 Attachments: HIVE-5400.patch


 From here: 
 https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13782220&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13782220
  I think we should afford admins who want to disable this functionality the 
 ability to do so. Since such admins might want to disable other commands such 
 as add or dfs, it wouldn't be much trouble to allow them to do this as well. 
 For example we could have a configuration option "hive.available.commands" 
 (or similar) which specified "add,set,delete,reset", etc. by default. Then check 
 this value in CommandProcessorFactory. It would probably make sense to add 
 this property to the restrict list.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5400) Allow admins to disable compile and other commands

2013-10-01 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783368#comment-13783368
 ] 

Edward Capriolo commented on HIVE-5400:
---

I did not mean "hack in" in a bad way. But we do not want a lot of strings, and 
to have to connect the dots between seemingly unrelated classes to see why a 
feature is working or not.

 Allow admins to disable compile and other commands
 --

 Key: HIVE-5400
 URL: https://issues.apache.org/jira/browse/HIVE-5400
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Edward Capriolo
 Attachments: HIVE-5400.patch


 From here: 
 https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13782220&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13782220
  I think we should afford admins who want to disable this functionality the 
 ability to do so. Since such admins might want to disable other commands such 
 as add or dfs, it wouldn't be much trouble to allow them to do this as well. 
 For example we could have a configuration option "hive.available.commands" 
 (or similar) which specified "add,set,delete,reset", etc. by default. Then check 
 this value in CommandProcessorFactory. It would probably make sense to add 
 this property to the restrict list.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5403) Move loading of filesystem, ugi, metastore client to hive session

2013-10-01 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783378#comment-13783378
 ] 

Gunther Hagleitner commented on HIVE-5403:
--

Looks good to me. +1

 Move loading of filesystem, ugi, metastore client to hive session
 -

 Key: HIVE-5403
 URL: https://issues.apache.org/jira/browse/HIVE-5403
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.13.0
Reporter: Vikram Dixit K
Assignee: Vikram Dixit K
 Attachments: HIVE-5403.1.patch, HIVE-5403.2.patch


 As part of HIVE-5184, the metastore connection, loading filesystem were done 
 as part of the tez session so as to speed up query times while paying a cost 
 at startup. We can do this more generally in hive to apply to both the 
 mapreduce and tez side of things.
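 The idea can be sketched with stand-in tasks (none of these names come from the 
 patch): kick off the slow setup concurrently when the session starts, so the 
 first query only blocks on whatever warm-up is still outstanding.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SessionWarmup {
    private final ExecutorService pool = Executors.newFixedThreadPool(3);
    private Future<String> fs, ugi, metastore;

    // Start the expensive lookups in the background at session creation.
    void start() {
        fs = pool.submit(() -> "filesystem ready");       // stand-in for FileSystem setup
        ugi = pool.submit(() -> "ugi ready");             // stand-in for UGI setup
        metastore = pool.submit(() -> "metastore ready"); // stand-in for metastore client
    }

    // The first query blocks only on warm-up work that hasn't finished yet.
    String awaitAll() {
        try {
            return fs.get() + ", " + ugi.get() + ", " + metastore.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        SessionWarmup session = new SessionWarmup();
        session.start();
        System.out.println(session.awaitAll());
    }
}
```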



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5400) Allow admins to disable compile and other commands

2013-10-01 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5400:
---

Status: Open  (was: Patch Available)

 Allow admins to disable compile and other commands
 --

 Key: HIVE-5400
 URL: https://issues.apache.org/jira/browse/HIVE-5400
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Edward Capriolo
 Attachments: HIVE-5400.patch


 From here: 
 https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13782220&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13782220
  I think we should afford admins who want to disable this functionality the 
 ability to do so. Since such admins might want to disable other commands such 
 as add or dfs, it wouldn't be much trouble to allow them to do this as well. 
 For example we could have a configuration option "hive.available.commands" 
 (or similar) which specified "add,set,delete,reset", etc. by default. Then check 
 this value in CommandProcessorFactory. It would probably make sense to add 
 this property to the restrict list.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5413) StorageDelegationAuthorizationProvider uses non-existent org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler

2013-10-01 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-5413:


 Summary: StorageDelegationAuthorizationProvider uses non-existent 
org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler
 Key: HIVE-5413
 URL: https://issues.apache.org/jira/browse/HIVE-5413
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0


org.apache.hive.hcatalog.security.StorageDelegationAuthorizationProvider
has a block like this:

  static {
    registerAuthProvider("org.apache.hadoop.hive.hbase.HBaseStorageHandler",
      "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");

    registerAuthProvider("org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler",
      "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");
  }


In reality, HBaseHCatStorageHandler and HBaseAuthorizationProvider only exist 
in org.apache.hcatalog

This should be fixed.  Also, we should use Foo.class.getName() instead of strings 
to make this a compile-time check.
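A small, self-contained illustration of the class-literal suggestion; the nested classes are stand-ins so the sketch compiles (the real ones are the HBase handler and provider named above):

```java
import java.util.HashMap;
import java.util.Map;

public class AuthProviderRegistry {
    // Stand-ins so the example compiles; the real classes are the HBase
    // storage handler and authorization provider.
    static class HBaseStorageHandler {}
    static class HBaseAuthorizationProvider {}

    private static final Map<String, String> REGISTRY = new HashMap<>();

    static void registerAuthProvider(String handlerClass, String providerClass) {
        REGISTRY.put(handlerClass, providerClass);
    }

    static String lookup(String handlerClass) {
        return REGISTRY.get(handlerClass);
    }

    static {
        // A class literal is checked by the compiler: a renamed or missing
        // class becomes a compile error instead of a silently wrong string.
        registerAuthProvider(HBaseStorageHandler.class.getName(),
            HBaseAuthorizationProvider.class.getName());
    }

    public static void main(String[] args) {
        System.out.println(lookup(HBaseStorageHandler.class.getName()));
    }
}
```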



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5274) HCatalog package renaming backward compatibility follow-up

2013-10-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-5274:
-

Description: 
As part of HIVE-4869, the hbase storage handler in hcat was moved to 
org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it was 
intended to be deprecated as well.

However, it imports and uses several org.apache.hive.hcatalog classes. This 
needs to be changed to use org.apache.hcatalog classes.

==

Note: The above is a complete description of this issue in and of itself; 
the following gives more detail on the backward-compatibility goal I have (not 
saying that each of these things is violated): 

a) People using org.apache.hcatalog packages should continue being able to use 
that package, and see no difference at compile time or runtime. All code here 
is considered deprecated, and will be gone by the time hive 0.14 rolls around. 
Additionally, org.apache.hcatalog should behave as if it were 0.11 for all 
compatibility purposes.

b) People using org.apache.hive.hcatalog packages should never have an 
org.apache.hcatalog dependency injected in.

Thus,

It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
internally (say HCatUtil, for example), as long as any interfaces only expose 
org.apache.hcatalog.\* For tests that test org.apache.hcatalog.\*, we must be 
capable of testing it from a pure org.apache.hcatalog.\* world.

It is never okay for org.apache.hive.hcatalog to use org.apache.hcatalog, even 
in tests.

One addition/clarification:
any application using org.apache.hcatalog.hbase.HBaseHCatStorageHandler must 
only use classes from org.apache.hcatalog.  For example 
org.apache.hcatalog.mapreduce.OutputJobInfo rather than the new 
org.apache.hive.hcatalog.mapreduce.OutputJobInfo.

  was:
As part of HIVE-4869, the hbase storage handler in hcat was moved to 
org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it was 
intended to be deprecated as well.

However, it imports and uses several org.apache.hive.hcatalog classes. This 
needs to be changed to use org.apache.hcatalog classes.

==

Note: The above is a complete description of this issue in and of itself; 
the following gives more detail on the backward-compatibility goal I have (not 
saying that each of these things is violated): 

a) People using org.apache.hcatalog packages should continue being able to use 
that package, and see no difference at compile time or runtime. All code here 
is considered deprecated, and will be gone by the time hive 0.14 rolls around. 
Additionally, org.apache.hcatalog should behave as if it were 0.11 for all 
compatibility purposes.

b) People using org.apache.hive.hcatalog packages should never have an 
org.apache.hcatalog dependency injected in.

Thus,

It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
internally (say HCatUtil, for example), as long as any interfaces only expose 
org.apache.hcatalog.\* For tests that test org.apache.hcatalog.\*, we must be 
capable of testing it from a pure org.apache.hcatalog.\* world.

It is never okay for org.apache.hive.hcatalog to use org.apache.hcatalog, even 
in tests.


 HCatalog package renaming backward compatibility follow-up
 --

 Key: HIVE-5274
 URL: https://issues.apache.org/jira/browse/HIVE-5274
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Sushanth Sowmyan
Assignee: Sushanth Sowmyan
 Fix For: 0.12.0

 Attachments: HIVE-5274.2.patch, HIVE-5274.3.patch, HIVE-5274.4.patch


 As part of HIVE-4869, the hbase storage handler in hcat was moved to 
 org.apache.hive.hcatalog, and then put back to org.apache.hcatalog since it 
 was intended to be deprecated as well.
 However, it imports and uses several org.apache.hive.hcatalog classes. This 
 needs to be changed to use org.apache.hcatalog classes.
 ==
 Note: The above is a complete description of this issue in and of itself; 
 the following gives more detail on the backward-compatibility goal I have (not 
 saying that each of these things is violated): 
 a) People using org.apache.hcatalog packages should continue being able to 
 use that package, and see no difference at compile time or runtime. All code 
 here is considered deprecated, and will be gone by the time hive 0.14 rolls 
 around. Additionally, org.apache.hcatalog should behave as if it were 0.11 
 for all compatibility purposes.
 b) People using org.apache.hive.hcatalog packages should never have an 
 org.apache.hcatalog dependency injected in.
 Thus,
 It is okay for org.apache.hcatalog to use org.apache.hive.hcatalog packages 
 internally (say HCatUtil, for example), as long as any interfaces only expose 
 org.apache.hcatalog.\* For tests 

Re: did you always have to log in to phabricator

2013-10-01 Thread Edward Capriolo
Previously we all agreed that the poster of the patch could choose the
review board of their choice. When this was agreed upon, Review Board did
not require a login to view a review.

I will likely start a PMC vote on this issue soon. If the vote passes we
will remove mention of phabricator from the hive documentation, and not
accept patches for review not posted on Apache's review board.

That being said, if you are invested in phabricator as our review system,
(ring ring committers that put this system into place) you had better get
the wheels moving on removing the required password.



On Tue, Oct 1, 2013 at 4:27 PM, Sean Busbey bus...@cloudera.com wrote:

 Well, the ASF Review Board does currently work. Our dev guide suggests that
 either it or Phabricator is fine. Generally all of my own jira submissions
 include an RB link, so I'm fairly confident it works (for the features that RB
 has generally).

 I presume something as strong as a statement of "we're going to stop using
 Phabricator" as a project would require a PMC vote?

 -Sean


 On Tue, Oct 1, 2013 at 1:15 PM, Edward Capriolo edlinuxg...@gmail.com
 wrote:

  I do not know what to say on this, other than that we are stuck between a rock
  and a hard place. I would say we should just stop using Phabricator
  altogether, but then again we already have two tickets opened with
  ASF infrastructure not going anywhere for months (moving our site to a CMS,
  fixing the broken Confluence-to-wiki publishing) so moving the process
  officially to some ASF Review Board type thing might have similar issues.
 
  What we might have to do is host Phabricator ourselves, like we are hosting
  our unit testing ourselves. I do not know what else to say on this.
 
 
  On Sat, Sep 28, 2013 at 8:55 PM, Sean Busbey bus...@cloudera.com
 wrote:
 
   Bump. Any update on this?
  
  
   On Tue, Sep 17, 2013 at 12:41 PM, Edward Capriolo 
 edlinuxg...@gmail.com
   wrote:
  
I do not like this. It is inconvenient when using a mobile device, but
more importantly it does not seem very transparent to our end users. For
example, a user browsing jira may want to review the code only on
Review Board (not yet attached to the issue); they should not be forced to
sign up to help in the process.
   
Would anyone from Facebook care to chime in here? I think we all like
Phabricator for the most part. Our docs suggest that Phabricator is our
de facto review system. As an ASF project I do not think requiring a login
on some external service even to review a jira is correct.
   
   
On Tue, Sep 17, 2013 at 12:27 PM, Xuefu Zhang xzh...@cloudera.com
   wrote:
   
 Yeah. I used to be able to view w/o login, but now I am not.


 On Tue, Sep 17, 2013 at 7:27 AM, Brock Noland br...@cloudera.com
wrote:

  Personally I prefer Review Board.
 
  On Tue, Sep 17, 2013 at 8:31 AM, Edward Capriolo 
edlinuxg...@gmail.com
  wrote:
   I never remember having to log into Phabricator to view a patch. Has this
   changed recently? I believe that having to create an external account to
   view a patch in progress is not something we should be doing.
 
 
 
  --
  Apache MRUnit - Unit testing MapReduce -
 http://mrunit.apache.org
 

   
  
  
  
   --
   Sean
  
 



 --
 Sean



[jira] [Commented] (HIVE-5400) Allow admins to disable compile and other commands

2013-10-01 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783391#comment-13783391
 ] 

Brock Noland commented on HIVE-5400:


I am fine with implementing this for both HS2 and CLI/HS1. Since we use 
strings for CLI, HS1, and HS2 at present, I can add an enum which will be used 
by all three.

 Allow admins to disable compile and other commands
 --

 Key: HIVE-5400
 URL: https://issues.apache.org/jira/browse/HIVE-5400
 Project: Hive
  Issue Type: Sub-task
Reporter: Brock Noland
Assignee: Edward Capriolo
 Attachments: HIVE-5400.patch


 From here: 
 https://issues.apache.org/jira/browse/HIVE-5253?focusedCommentId=13782220&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13782220
  I think we should afford admins who want to disable this functionality the 
 ability to do so. Since such admins might want to disable other commands such 
 as add or dfs, it wouldn't be much trouble to allow them to do this as well. 
 For example we could have a configuration option "hive.available.commands" 
 (or similar) which specified "add,set,delete,reset", etc. by default. Then check 
 this value in CommandProcessorFactory. It would probably make sense to add 
 this property to the restrict list.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5413) StorageDelegationAuthorizationProvider uses non-existent org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler

2013-10-01 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-5413:
-

Description: 
org.apache.hive.hcatalog.security.StorageDelegationAuthorizationProvider
has a block like this:

  static {
    registerAuthProvider("org.apache.hadoop.hive.hbase.HBaseStorageHandler",
      "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");

    registerAuthProvider("org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler",
      "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");
  }


In reality, HBaseHCatStorageHandler and HBaseAuthorizationProvider only exist 
in org.apache.hcatalog

This should be fixed.  Also, we should use Foo.class.getName() instead of strings 
to make this a compile-time check.


Also,
hcatalog/src/test/e2e/hcatalog/tests/pig.conf & hadoop.conf have the same 
problem.  
In addition, the tests affected in pig.conf/hadoop.conf should use 
org.apache.hcatalog.pig.HCatLoader/HCatStorer.

Finally, hadoop.conf#Hadoop_HBase is using 
org.apache.hive.hcatalog.utils.HBaseReadWrite which internally refers to 
org.apache.hive.hcatalog.* classes.  The latter should only use 
org.apache.hcatalog.* since it's using HBaseHCatStorageHandler.  Also, we should 
move HBaseReadWrite to org.apache.hcatalog for clarity.
(see the last paragraph of the Description of HIVE-5274)




  was:
org.apache.hive.hcatalog.security.StorageDelegationAuthorizationProvider
has a block like this:

  static {
    registerAuthProvider("org.apache.hadoop.hive.hbase.HBaseStorageHandler",
      "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");

    registerAuthProvider("org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler",
      "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");
  }


In reality, HBaseHCatStorageHandler and HBaseAuthorizationProvider only exist 
in org.apache.hcatalog

This should be fixed.  Also, we should use Foo.class.getName() instead of strings 
to make this a compile-time check.


 StorageDelegationAuthorizationProvider uses non-existent 
 org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler
 ---

 Key: HIVE-5413
 URL: https://issues.apache.org/jira/browse/HIVE-5413
 Project: Hive
  Issue Type: Bug
  Components: HCatalog
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0


 org.apache.hive.hcatalog.security.StorageDelegationAuthorizationProvider
 has a block like this:
   static {
 registerAuthProvider("org.apache.hadoop.hive.hbase.HBaseStorageHandler",
   "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");
 
 registerAuthProvider("org.apache.hive.hcatalog.hbase.HBaseHCatStorageHandler",
   "org.apache.hive.hcatalog.hbase.HBaseAuthorizationProvider");
   }
 In reality, HBaseHCatStorageHandler and HBaseAuthorizationProvider only exist 
 in org.apache.hcatalog
 This should be fixed.  Also, we should use Foo.class.getName() instead of 
 strings to make this a compile-time check.
 Also,
 hcatalog/src/test/e2e/hcatalog/tests/pig.conf & hadoop.conf have the same 
 problem.  
 In addition, the tests affected in pig.conf/hadoop.conf should use 
 org.apache.hcatalog.pig.HCatLoader/HCatStorer.
 Finally, hadoop.conf#Hadoop_HBase is using 
 org.apache.hive.hcatalog.utils.HBaseReadWrite which internally refers to 
 org.apache.hive.hcatalog.* classes.  The latter should only use 
 org.apache.hcatalog.* since it's using HBaseHCatStorageHandler.  Also, we 
 should move HBaseReadWrite to org.apache.hcatalog for clarity.
 (see the last paragraph of the Description of HIVE-5274)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13783398#comment-13783398
 ] 

Ashutosh Chauhan commented on HIVE-5405:


I think it might make sense to have this in 0.12 as well, since 0.12 users 
won't enjoy the Kryo benefits otherwise. 
cc: [~thejas] What do you think?

 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
   <name>hive.plan.serialization.format</name>
   <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.
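A minimal sketch of the kind of fix described: java.beans.XMLEncoder throws
InstantiationException for classes without a public no-arg constructor (as with
org.antlr.runtime.CommonToken above), and a PersistenceDelegate tells the
encoder which constructor expression to emit instead. The Token class below is
a stand-in for CommonToken, not the real ANTLR class, and the delegate shown is
the generic JDK one, not Hive's actual patch.

```java
import java.beans.DefaultPersistenceDelegate;
import java.beans.XMLEncoder;
import java.io.ByteArrayOutputStream;

public class TokenDelegateDemo {
    // Stand-in for CommonToken: constructor-only state, no default constructor,
    // which is exactly what trips up XMLEncoder without a delegate.
    public static class Token {
        private final int type;
        private final String text;
        public Token(int type, String text) { this.type = type; this.text = text; }
        public int getType() { return type; }
        public String getText() { return text; }
    }

    public static String encode(Token token) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (XMLEncoder encoder = new XMLEncoder(out)) {
            // Map the constructor arguments to the "type" and "text" properties
            // so the encoder writes the equivalent of new Token(getType(), getText()).
            encoder.setPersistenceDelegate(Token.class,
                    new DefaultPersistenceDelegate(new String[] {"type", "text"}));
            encoder.writeObject(token);
        }
        return out.toString();
    }
}
```

Without the setPersistenceDelegate call, encoding the same object reproduces the InstantiationException shown in the stack trace.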





[jira] [Commented] (HIVE-2584) Alter table should accept database name

2013-10-01 Thread Rahul Challapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-2584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783399#comment-13783399
 ] 

Rahul Challapalli commented on HIVE-2584:
-

Is there any update on this?

 Alter table should accept database name
 ---

 Key: HIVE-2584
 URL: https://issues.apache.org/jira/browse/HIVE-2584
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.7.1
Reporter: Bharath Mundlapudi

 It would be nice if alter table accepts database name.
 For example:
 This would be more useful in certain use cases:
  
 alter table DB.Tbl set location 'location';
 rather than 2 statements.
 use DB;
 alter table Tbl set location 'location';
  





[jira] [Commented] (HIVE-5405) Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken

2013-10-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783407#comment-13783407
 ] 

Thejas M Nair commented on HIVE-5405:
-

Yes, I will include it in 0.12. This looks like a reasonably safe fix.


 Need to implement PersistenceDelegate for org.antlr.runtime.CommonToken
 ---

 Key: HIVE-5405
 URL: https://issues.apache.org/jira/browse/HIVE-5405
 Project: Hive
  Issue Type: Bug
Affects Versions: 0.12.0
Reporter: shanyu zhao
Assignee: shanyu zhao
 Attachments: HIVE-5405.patch


 Prior to HIVE-1511, running hive join operation results in the following 
 exception:
 java.lang.RuntimeException: Cannot serialize object
 at org.apache.hadoop.hive.ql.exec.Utilities$1.exceptionThrown(Utilities.java:639)
 at java.beans.XMLEncoder.writeStatement(XMLEncoder.java:426)
 ...
 Caused by: java.lang.InstantiationException: org.antlr.runtime.CommonToken
 at java.lang.Class.newInstance0(Class.java:357)
 at java.lang.Class.newInstance(Class.java:325)
 at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 HIVE-1511 introduced a new (and set to default) hive plan serialization 
 format Kryo, which fixed this problem by implementing the Kryo serializer for 
 CommonToken. However, if we set the following in configuration file:
 <property>
   <name>hive.plan.serialization.format</name>
   <value>javaXML</value>
 </property>
 We'll see the same failure as before. We need to implement a 
 PersistenceDelegate for the situation when javaXML is set to serialization 
 format.





[jira] [Updated] (HIVE-5114) add a target to run tests without rebuilding them

2013-10-01 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-5114:
---

   Resolution: Fixed
Fix Version/s: 0.13.0
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks, Sergey and Brock!

 add a target to run tests without rebuilding them
 -

 Key: HIVE-5114
 URL: https://issues.apache.org/jira/browse/HIVE-5114
 Project: Hive
  Issue Type: Improvement
  Components: Build Infrastructure
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.13.0

 Attachments: HIVE-5114.2.patch, HIVE-5114.D12399.1.patch


 It is sometimes annoying that each "ant test ..." invocation cleans and rebuilds the 
 tests. It should be relatively easy to add a testonly target that would 
 just run the test(s) on the existing build.





[jira] [Commented] (HIVE-4941) PTest2 Investigate Ignores

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783412#comment-13783412
 ] 

Ashutosh Chauhan commented on HIVE-4941:


What's the status of these tasks? Are all tests now running with PTest2?

 PTest2 Investigate Ignores
 --

 Key: HIVE-4941
 URL: https://issues.apache.org/jira/browse/HIVE-4941
 Project: Hive
  Issue Type: Task
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor

 Currently we are excluding the following tests:
 unitTests.exclude = TestHiveMetaStore TestSerDe TestBeeLineDriver 
 TestHiveServer2Concurrency TestJdbcDriver2 TestHiveServer2Concurrency 
 TestBeeLineDriver
 Some of them we got from the build files, but I am not sure about 
 TestJdbcDriver2, for example. We should investigate why these are excluded.





[jira] [Commented] (HIVE-5383) PTest2 should allow you to specify ant properties which will only be added to the command when a test is executed

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783419#comment-13783419
 ] 

Ashutosh Chauhan commented on HIVE-5383:


Didn't look at the patch in detail, but the feature is useful. +1

 PTest2 should allow you to specify ant properties which will only be added to 
 the command when a test is executed
 -

 Key: HIVE-5383
 URL: https://issues.apache.org/jira/browse/HIVE-5383
 Project: Hive
  Issue Type: Improvement
Affects Versions: 0.13.0
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Minor
 Attachments: HIVE-5383.patch


 It'd be nice if we could specify things like:
 -DgrammarBuild.notRequired=true -Dskip.javadoc=true
 when we actually execute a test.
 NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-4907) Allow additional tests cases to be specified with -Dtestcase

2013-10-01 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4907?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783421#comment-13783421
 ] 

Ashutosh Chauhan commented on HIVE-4907:


This feature makes sense. Useful for both ptest2 as well as for running 
standalone tests. Is there any work left on this or is it ready to go in?

 Allow additional tests cases to be specified with -Dtestcase
 

 Key: HIVE-4907
 URL: https://issues.apache.org/jira/browse/HIVE-4907
 Project: Hive
  Issue Type: Improvement
  Components: Testing Infrastructure
Reporter: Brock Noland
Assignee: Brock Noland
 Attachments: HIVE-4907.patch


 Currently we only allow a single test case to be specified with -Dtestcase. 
 It'd be ideal if we could add additional test cases, as this would allow us 
 to batch the unit tests in ptest2.





[jira] [Commented] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-01 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783422#comment-13783422
 ] 

Sergey Shelukhin commented on HIVE-5391:


Discussed out of band. Summary: the same predicates as for normal reading apply; 
ORC currently only uses predicates during row-group loading, so no 
vectorization-specific code should be necessary during row processing.

 make ORC predicate pushdown work with vectorization
 ---

 Key: HIVE-5391
 URL: https://issues.apache.org/jira/browse/HIVE-5391
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-5391.01-vectorization.patch, 
 HIVE-5391-vectorization.patch


 Vectorized execution doesn't utilize ORC predicate pushdown. It should.





[jira] [Commented] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-01 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13783426#comment-13783426
 ] 

Sergey Shelukhin commented on HIVE-5391:


Review is at https://reviews.facebook.net/D13203

 make ORC predicate pushdown work with vectorization
 ---

 Key: HIVE-5391
 URL: https://issues.apache.org/jira/browse/HIVE-5391
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-5391.01-vectorization.patch, 
 HIVE-5391-vectorization.patch


 Vectorized execution doesn't utilize ORC predicate pushdown. It should.





[jira] [Updated] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-01 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5391:
---

Attachment: HIVE-5391.patch

 make ORC predicate pushdown work with vectorization
 ---

 Key: HIVE-5391
 URL: https://issues.apache.org/jira/browse/HIVE-5391
 Project: Hive
  Issue Type: Improvement
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HIVE-5391.01-vectorization.patch, HIVE-5391.patch, 
 HIVE-5391-vectorization.patch


 Vectorized execution doesn't utilize ORC predicate pushdown. It should.





[jira] [Updated] (HIVE-4669) Make username available to semantic analyzer hooks

2013-10-01 Thread Shreepadma Venugopalan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shreepadma Venugopalan updated HIVE-4669:
-

Status: Open  (was: Patch Available)

 Make username available to semantic analyzer hooks
 --

 Key: HIVE-4669
 URL: https://issues.apache.org/jira/browse/HIVE-4669
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.11.0, 0.10.0
Reporter: Shreepadma Venugopalan
Assignee: Shreepadma Venugopalan
 Attachments: HIVE-4669.1.patch, HIVE-4669.2.patch


 Make username available to the semantic analyzer hooks.




