[jira] [Commented] (HIVE-5372) Refactor TypeInfo and PrimitiveTypeEntry class hierarchy to eliminate info repetition

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786962#comment-13786962
 ] 

Hive QA commented on HIVE-5372:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606973/HIVE-5372.2.patch

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 4052 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.serde2.typeinfo.TestTypeInfoUtils.testVarcharNoParams
org.apache.hive.jdbc.TestJdbcDriver2.testDataTypes
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1039/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1039/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

> Refactor TypeInfo and PrimitiveTypeEntry class hierarchy to eliminate info 
> repetition
> 
>
> Key: HIVE-5372
> URL: https://issues.apache.org/jira/browse/HIVE-5372
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5372.1.patch, HIVE-5372.2.patch, HIVE-5372.patch
>
>
> TypeInfo with its sub-classes and the PrimitiveTypeEntry class seem to carry 
> repetitive information, such as type names and type params. It would be good 
> if we could streamline how this information is organized.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5087) Rename npath UDF to matchpath

2013-10-04 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-5087:
-

Attachment: HIVE-5087.2.patch

Using the right patch name format this time...

> Rename npath UDF to matchpath
> -
>
> Key: HIVE-5087
> URL: https://issues.apache.org/jira/browse/HIVE-5087
> Project: Hive
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5087.1.patch.txt, HIVE-5087.2.patch, 
> HIVE-5087.99.patch.txt, HIVE-5087-matchpath.1.patch.txt, 
> HIVE-5087-matchpath.2.patch, HIVE-5087.patch.txt, HIVE-5087.patch.txt, 
> regex_path.diff
>
>






[jira] [Updated] (HIVE-5087) Rename npath UDF to matchpath

2013-10-04 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-5087:
-

Status: Open  (was: Patch Available)

> Rename npath UDF to matchpath
> -
>
> Key: HIVE-5087
> URL: https://issues.apache.org/jira/browse/HIVE-5087
> Project: Hive
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5087.1.patch.txt, HIVE-5087.2.patch, 
> HIVE-5087.99.patch.txt, HIVE-5087-matchpath.1.patch.txt, 
> HIVE-5087-matchpath.2.patch, HIVE-5087.patch.txt, HIVE-5087.patch.txt, 
> regex_path.diff
>
>






[jira] [Updated] (HIVE-5087) Rename npath UDF to matchpath

2013-10-04 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-5087:
-

Status: Patch Available  (was: Open)

> Rename npath UDF to matchpath
> -
>
> Key: HIVE-5087
> URL: https://issues.apache.org/jira/browse/HIVE-5087
> Project: Hive
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5087.1.patch.txt, HIVE-5087.2.patch, 
> HIVE-5087.99.patch.txt, HIVE-5087-matchpath.1.patch.txt, 
> HIVE-5087-matchpath.2.patch, HIVE-5087.patch.txt, HIVE-5087.patch.txt, 
> regex_path.diff
>
>






[jira] [Updated] (HIVE-5087) Rename npath UDF to matchpath

2013-10-04 Thread Carl Steinbach (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carl Steinbach updated HIVE-5087:
-

Attachment: HIVE-5087-matchpath.2.patch

Rebasing the old patch in response to some test and PTF changes.

> Rename npath UDF to matchpath
> -
>
> Key: HIVE-5087
> URL: https://issues.apache.org/jira/browse/HIVE-5087
> Project: Hive
>  Issue Type: Bug
>Reporter: Edward Capriolo
>Assignee: Edward Capriolo
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5087.1.patch.txt, HIVE-5087.99.patch.txt, 
> HIVE-5087-matchpath.1.patch.txt, HIVE-5087-matchpath.2.patch, 
> HIVE-5087.patch.txt, HIVE-5087.patch.txt, regex_path.diff
>
>






[jira] [Updated] (HIVE-5372) Refactor TypeInfo and PrimitiveTypeEntry class hierarchy to eliminate info repetition

2013-10-04 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5372:
--

Attachment: HIVE-5372.2.patch

Patch #2 fixes the failed test cases.

Note that this update doesn't yet address the review comments. Also, I'm not sure 
about a couple of the test cases, so I will keep looking into them even if they 
pass this time.

> Refactor TypeInfo and PrimitiveTypeEntry class hierarchy to eliminate info 
> repetition
> 
>
> Key: HIVE-5372
> URL: https://issues.apache.org/jira/browse/HIVE-5372
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5372.1.patch, HIVE-5372.2.patch, HIVE-5372.patch
>
>
> TypeInfo with its sub-classes and the PrimitiveTypeEntry class seem to carry 
> repetitive information, such as type names and type params. It would be good 
> if we could streamline how this information is organized.





[jira] [Commented] (HIVE-4888) listPartitionsByFilter doesn't support lt/gt/lte/gte

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786945#comment-13786945
 ] 

Hive QA commented on HIVE-4888:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606861/D13101.6.patch

{color:green}SUCCESS:{color} +1 4054 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1038/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1038/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> listPartitionsByFilter doesn't support lt/gt/lte/gte
> 
>
> Key: HIVE-4888
> URL: https://issues.apache.org/jira/browse/HIVE-4888
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: D13101.1.patch, D13101.2.patch, D13101.3.patch, 
> D13101.4.patch, D13101.5.patch, D13101.6.patch, HIVE-4888.00.patch, 
> HIVE-4888.01.patch, HIVE-4888.04.patch, HIVE-4888.05.patch, 
> HIVE-4888.06.patch, HIVE-4888.on-top-of-4914.patch
>
>
> Filter pushdown could be improved. Based on my experiments, there's no 
> reasonable way to do it with DN 2.0, due to a DN bug in substring and 
> Collection.get(int) not being implemented.
> With a version as low as 2.1, we can use values.get on a partition to extract 
> values to compare against. Type compatibility is an issue, but it is easy for 
> strings and integral values.





[jira] [Commented] (HIVE-5155) Support secure proxy user access to HiveServer2

2013-10-04 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786933#comment-13786933
 ] 

Prasad Mujumdar commented on HIVE-5155:
---

Hey [~thejas], no problem. I agree that it would be a bit risky to add a large 
feature just before the RC.
Let's try to get this into 0.13. Please take a look when you get a chance. Thanks!


> Support secure proxy user access to HiveServer2
> ---
>
> Key: HIVE-5155
> URL: https://issues.apache.org/jira/browse/HIVE-5155
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication, HiveServer2, JDBC
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5155-1-nothrift.patch, HIVE-5155.1.patch, 
> HIVE-5155.2.patch, HIVE-5155.3.patch, HIVE-5155-noThrift.2.patch, 
> HIVE-5155-noThrift.4.patch, ProxyAuth.jar, ProxyAuth.java, ProxyAuth.results
>
>
> HiveServer2 can authenticate a client via Kerberos and impersonate the 
> connecting user on the underlying secure Hadoop cluster. This makes it a 
> gateway for remote clients to access a secure Hadoop cluster. This works fine 
> when the client obtains a Kerberos ticket and connects directly to HiveServer2. 
> There's another big use case for middleware tools, where the end user wants to 
> access Hive via another server: for example, an Oozie action or Hue submitting 
> queries, or a BI tool server accessing HiveServer2. In these cases, the 
> third-party server doesn't have the end user's Kerberos credentials and hence 
> can't submit queries to HiveServer2 on behalf of the end user.
> This ticket is about enabling proxy access to HiveServer2 for third-party 
> tools on behalf of end users. There are two parts to the solution proposed in 
> this ticket:
> 1) Delegation-token-based connection for Oozie (OOZIE-1457)
> This is the common mechanism for Hadoop ecosystem components; the Hive Remote 
> Metastore and HCatalog already support it. It is suitable for a tool like 
> Oozie that submits MR jobs as actions on behalf of its clients; Oozie already 
> uses a similar mechanism for Metastore/HCatalog access.
> 2) Direct proxy access for privileged Hadoop users
> The delegation token implementation can be a challenge for non-Hadoop 
> (especially non-Java) components. This second part enables a privileged user 
> to directly specify an alternate session user during the connection. If the 
> connecting user has the Hadoop-level privilege to impersonate the requested 
> userid, then HiveServer2 will run the session as that user. For example, 
> suppose user Hue is allowed to impersonate user Bob (via the core-site.xml 
> proxy user configuration). Then user Hue can connect to HiveServer2 and 
> specify Bob as the session user via a session property. HiveServer2 will 
> verify Hue's proxy user privilege and then impersonate user Bob instead of 
> Hue. This enables any third-party tool to impersonate an alternate userid 
> without having to implement the delegation token connection.
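To make part (2) concrete: assuming the session property ends up being named hive.server2.proxy.user (the name used here for illustration, not a confirmed final API), a privileged user's Beeline connection might look like the following sketch, with host, port, and Kerberos realm as placeholders:

```
beeline -u "jdbc:hive2://hs2host:10000/default;principal=hive/hs2host@EXAMPLE.COM;hive.server2.proxy.user=bob"
```

HiveServer2 would check the authenticated user's proxy privileges against the Hadoop proxy-user configuration and, if allowed, run the session as bob.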





[jira] [Commented] (HIVE-5155) Support secure proxy user access to HiveServer2

2013-10-04 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786926#comment-13786926
 ] 

Thejas M Nair commented on HIVE-5155:
-

This looks like a very valuable feature, but it is also a big one (new 
interfaces). I will not be able to finish reviewing it tonight; I will try to 
finish reviewing over the weekend.
I think it is too late to include this major feature in Hive 0.12. I have been 
including only important bug fixes in the last few days to stabilize the release 
(as I mentioned earlier in the email to the dev list). I am sorry, I should have 
reviewed it earlier so that we had enough time.


> Support secure proxy user access to HiveServer2
> ---
>
> Key: HIVE-5155
> URL: https://issues.apache.org/jira/browse/HIVE-5155
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication, HiveServer2, JDBC
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5155-1-nothrift.patch, HIVE-5155.1.patch, 
> HIVE-5155.2.patch, HIVE-5155.3.patch, HIVE-5155-noThrift.2.patch, 
> HIVE-5155-noThrift.4.patch, ProxyAuth.jar, ProxyAuth.java, ProxyAuth.results
>
>
> HiveServer2 can authenticate a client via Kerberos and impersonate the 
> connecting user on the underlying secure Hadoop cluster. This makes it a 
> gateway for remote clients to access a secure Hadoop cluster. This works fine 
> when the client obtains a Kerberos ticket and connects directly to HiveServer2. 
> There's another big use case for middleware tools, where the end user wants to 
> access Hive via another server: for example, an Oozie action or Hue submitting 
> queries, or a BI tool server accessing HiveServer2. In these cases, the 
> third-party server doesn't have the end user's Kerberos credentials and hence 
> can't submit queries to HiveServer2 on behalf of the end user.
> This ticket is about enabling proxy access to HiveServer2 for third-party 
> tools on behalf of end users. There are two parts to the solution proposed in 
> this ticket:
> 1) Delegation-token-based connection for Oozie (OOZIE-1457)
> This is the common mechanism for Hadoop ecosystem components; the Hive Remote 
> Metastore and HCatalog already support it. It is suitable for a tool like 
> Oozie that submits MR jobs as actions on behalf of its clients; Oozie already 
> uses a similar mechanism for Metastore/HCatalog access.
> 2) Direct proxy access for privileged Hadoop users
> The delegation token implementation can be a challenge for non-Hadoop 
> (especially non-Java) components. This second part enables a privileged user 
> to directly specify an alternate session user during the connection. If the 
> connecting user has the Hadoop-level privilege to impersonate the requested 
> userid, then HiveServer2 will run the session as that user. For example, 
> suppose user Hue is allowed to impersonate user Bob (via the core-site.xml 
> proxy user configuration). Then user Hue can connect to HiveServer2 and 
> specify Bob as the session user via a session property. HiveServer2 will 
> verify Hue's proxy user privilege and then impersonate user Bob instead of 
> Hue. This enables any third-party tool to impersonate an alternate userid 
> without having to implement the delegation token connection.





[jira] [Updated] (HIVE-5431) PassthroughOutputFormat SH changes causes IllegalArgumentException

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5431:


Priority: Blocker  (was: Major)

> PassthroughOutputFormat SH changes causes IllegalArgumentException
> --
>
> Key: HIVE-5431
> URL: https://issues.apache.org/jira/browse/HIVE-5431
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>Priority: Blocker
> Attachments: HIVE-5431.2.patch, HIVE-5431.patch
>
>
> The recent changes with HIVE-4331 introduced a new key 
> "hive.passthrough.storagehandler.of", whose value is set only on storage 
> handler writes and, obviously, will not be set on reads. However, 
> PlanUtils.configureJobPropertiesForStorageHandler winds up trying to set the 
> key into jobProperties in both cases, which causes any reads that are not 
> preceded by writes to fail.
> Basically, if you have a .q file in which you insert data into an HBase table 
> and then read it, it's okay. If you have a .q file in which you only read 
> data, it throws an IllegalArgumentException, like so:
> {noformat}
> 2013-09-30 16:20:01,989 ERROR CliDriver (SessionState.java:printError(419)) - 
> Failed with exception java.io.IOException:java.lang.IllegalArgumentException: 
> Property value must not be null
> java.io.IOException: java.lang.IllegalArgumentException: Property value must 
> not be null
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:551)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.IllegalArgumentException: Property value must not be null
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1826)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:380)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
> ... 17 more
> {noformat}
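The stack trace bottoms out in Configuration.set() rejecting a null value via Preconditions.checkArgument. A minimal sketch of the defensive behavior (skipping keys whose value is unset when copying job properties) is below; the class and method names are illustrative, not the actual Hive patch:

```java
import java.util.HashMap;
import java.util.Map;

public class CopyJobProps {
    // Copy job properties into a conf-like map, skipping null values that
    // would make Hadoop's Configuration.set() throw IllegalArgumentException.
    static void copyNonNull(Map<String, String> jobProps, Map<String, String> conf) {
        for (Map.Entry<String, String> e : jobProps.entrySet()) {
            if (e.getValue() != null) {
                conf.put(e.getKey(), e.getValue());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> jobProps = new HashMap<>();
        // On a read path the passthrough key is never populated, so it is null.
        jobProps.put("hive.passthrough.storagehandler.of", null);
        jobProps.put("some.other.property", "value");

        Map<String, String> conf = new HashMap<>();
        copyNonNull(jobProps, conf);
        System.out.println(conf.size());
        System.out.println(conf.containsKey("hive.passthrough.storagehandler.of"));
    }
}
```

Only the non-null entry survives the copy, so a subsequent Configuration.set() never sees a null value.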





[jira] [Updated] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5402:


Priority: Blocker  (was: Major)

> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
>Priority: Blocker
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, there is a bug that causes SBAP to incorrectly conclude that it's 
> running on the metastore side when it's actually running on the client side, 
> which makes it throw an IllegalStateException claiming the warehouse variable 
> isn't set.





[jira] [Commented] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786916#comment-13786916
 ] 

Hive QA commented on HIVE-5391:
---



{color:red}Overall{color}: -1 at least one test failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606852/HIVE-5391.03.patch

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 4053 tests executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1037/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1037/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

> make ORC predicate pushdown work with vectorization
> ---
>
> Key: HIVE-5391
> URL: https://issues.apache.org/jira/browse/HIVE-5391
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-5391.01.patch, HIVE-5391.01-vectorization.patch, 
> HIVE-5391.02.patch, HIVE-5391.03.patch, HIVE-5391.patch, 
> HIVE-5391-vectorization.patch
>
>
> Vectorized execution doesn't utilize ORC predicate pushdown. It should.





[jira] [Updated] (HIVE-5453) jobsubmission2.conf should use 'timeout' property

2013-10-04 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-5453:
-

Attachment: HIVE-5453.patch

> jobsubmission2.conf should use 'timeout' property
> -
>
> Key: HIVE-5453
> URL: https://issues.apache.org/jira/browse/HIVE-5453
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-5453.patch
>
>
> TestDriverCurl.pm used to support timeout_seconds, which got renamed to 
> 'timeout'. This makes the TestHeartbeat test fail.





[jira] [Updated] (HIVE-5453) jobsubmission2.conf should use 'timeout' property

2013-10-04 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-5453:
-

Status: Patch Available  (was: Open)

> jobsubmission2.conf should use 'timeout' property
> -
>
> Key: HIVE-5453
> URL: https://issues.apache.org/jira/browse/HIVE-5453
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Fix For: 0.12.0
>
> Attachments: HIVE-5453.patch
>
>
> TestDriverCurl.pm used to support timeout_seconds, which got renamed to 
> 'timeout'. This makes the TestHeartbeat test fail.





[jira] [Created] (HIVE-5453) jobsubmission2.conf should use 'timeout' property

2013-10-04 Thread Eugene Koifman (JIRA)
Eugene Koifman created HIVE-5453:


 Summary: jobsubmission2.conf should use 'timeout' property
 Key: HIVE-5453
 URL: https://issues.apache.org/jira/browse/HIVE-5453
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Affects Versions: 0.12.0
Reporter: Eugene Koifman
Assignee: Eugene Koifman
 Fix For: 0.12.0


TestDriverCurl.pm used to support timeout_seconds, which got renamed to 
'timeout'. This makes the TestHeartbeat test fail.





[jira] [Updated] (HIVE-5334) Milestone 3: Some tests pass under maven

2013-10-04 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5334:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thank you very much Edward!

> Milestone 3: Some tests pass under maven
> 
>
> Key: HIVE-5334
> URL: https://issues.apache.org/jira/browse/HIVE-5334
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5334.patch, HIVE-5334.patch
>
>
> This milestone means that some tests pass, so we have the basic unit-test 
> environment set up. We'll hunt down the rest of the failing tests in 
> future jiras.
> NO PRECOMMIT TESTS





Re: Review Request 14490: HIVE-5372: Refactor TypeInfo and PrimitiveTypeEntry class hierarchy to eliminate info repetition

2013-10-04 Thread Jason Dere

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14490/#review26707
---


Nice, this looks much cleaner. I'll try to take another look later, but overall 
these look like good changes.



serde/src/java/org/apache/hadoop/hive/serde2/objectinspector/primitive/PrimitiveObjectInspectorFactory.java


Here you might have to pass in the TypeInfo. Test out stuff like cast('abc' 
as varchar(10)) - if it's not done right then the TypeInfo for that expression 
shows up as varchar(3).



serde/src/java/org/apache/hadoop/hive/serde2/typeinfo/VarcharUtils.java


Maybe this can be combined with ParameterizedPrimitiveTypeUtils, or those 
methods can be moved here since they are all varchar-specific now.


- Jason Dere


On Oct. 4, 2013, 2:07 p.m., Xuefu Zhang wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14490/
> ---
> 
> (Updated Oct. 4, 2013, 2:07 p.m.)
> 
> 
> Review request for hive and Ashutosh Chauhan.
> 
> 
> Bugs: HIVE-5372
> https://issues.apache.org/jira/browse/HIVE-5372
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> See JIRA comment 
> https://issues.apache.org/jira/browse/HIVE-5372?focusedCommentId=13785506&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13785506
> 
> 
> Diffs
> -
> 
>   
> contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesRecordReader.java
>  8fcb3b3 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/FunctionRegistry.java f8d1483 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcStruct.java eb10360 
>   ql/src/java/org/apache/hadoop/hive/ql/io/orc/WriterImpl.java 628efab 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java 
> 36034d6 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/ParseUtils.java c8c5f63 
>   ql/src/java/org/apache/hadoop/hive/ql/parse/TypeCheckProcFactory.java 
> af51072 
>   ql/src/java/org/apache/hadoop/hive/ql/plan/ExprNodeConstantDesc.java 
> 6538add 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/SettableUDF.java 9225aa1 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFConcat.java 
> 0ce1825 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFLower.java 
> 366d9e6 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFReflect2.java 
> 5ba2ec5 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFToVarchar.java 
> 509a392 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUpper.java 
> 1bb164a 
>   ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDFUtils.java 
> 6815195 
>   serde/src/java/org/apache/hadoop/hive/serde2/RegexSerDe.java 5de5bd5 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/binarysortable/BinarySortableSerDe.java
>  a206023 
>   serde/src/java/org/apache/hadoop/hive/serde2/dynamic_type/DynamicSerDe.java 
> ac81ab8 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyFactory.java 67f032c 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyHiveVarchar.java 
> 1286cba 
>   serde/src/java/org/apache/hadoop/hive/serde2/lazy/LazyUtils.java 214a3e7 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/AbstractPrimitiveLazyObjectInspector.java
>  29c8528 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyBinaryObjectInspector.java
>  dbd60f7 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyBooleanObjectInspector.java
>  954f1d9 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyByteObjectInspector.java
>  57c5169 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyDateObjectInspector.java
>  679e5ea 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyDoubleObjectInspector.java
>  675333a 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyFloatObjectInspector.java
>  648b629 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyHiveDecimalObjectInspector.java
>  564a1aa 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyHiveVarcharObjectInspector.java
>  e827e09 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyIntObjectInspector.java
>  81f6f05 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyLongObjectInspector.java
>  9455fbf 
>   
> serde/src/java/org/apache/hadoop/hive/serde2/lazy/objectinspector/primitive/LazyPrimitiveObjectInspectorFactory.java
>  e28

[jira] [Commented] (HIVE-5448) webhcat duplicate test TestMapReduce_2 should be removed

2013-10-04 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786866#comment-13786866
 ] 

Eugene Koifman commented on HIVE-5448:
--

+1

> webhcat duplicate test TestMapReduce_2 should be removed
> 
>
> Key: HIVE-5448
> URL: https://issues.apache.org/jira/browse/HIVE-5448
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-5448.1.patch
>
>
> TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
> of TestHeartbeat_2 in jobsubmission2.conf
> NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-5446) Hive can CREATE an external table but not SELECT from it when the file path has spaces

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786865#comment-13786865
 ] 

Hive QA commented on HIVE-5446:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606828/HIVE-5446.1.patch

{color:green}SUCCESS:{color} +1 4052 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1036/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1036/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> Hive can CREATE an external table but not SELECT from it when the file path 
> has spaces
> ---
>
> Key: HIVE-5446
> URL: https://issues.apache.org/jira/browse/HIVE-5446
> Project: Hive
>  Issue Type: Bug
>Reporter: Shuaishuai Nie
>Assignee: Shuaishuai Nie
> Attachments: HIVE-5446.1.patch
>
>
> Create external table table1 (age int, 
> gender string, totBil float, 
> dirBill float, alkphos int,
> sgpt int, sgot int, totProt float, 
> aLB float, aG float, sel int) 
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> STORED AS TEXTFILE
> LOCATION 'hdfs://namenodehost:9000/hive newtable';
> select * from table1;
> returns nothing even though there is a file in the target folder.
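One way to see why the unescaped space causes trouble: a literal space is not legal in a URI, which is the representation Hadoop path handling is built on. The sketch below uses plain java.net.URI for illustration; it is not the actual Hive code path:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class SpaceInLocation {
    public static void main(String[] args) {
        // The raw LOCATION string with a space is not a well-formed URI.
        try {
            new URI("hdfs://namenodehost:9000/hive newtable");
            System.out.println("parsed");
        } catch (URISyntaxException e) {
            System.out.println("URISyntaxException");
        }
        // Percent-encoding the space yields a valid URI whose decoded
        // path still contains the space.
        try {
            URI ok = new URI("hdfs://namenodehost:9000/hive%20newtable");
            System.out.println(ok.getPath());
        } catch (URISyntaxException e) {
            System.out.println("unexpected");
        }
    }
}
```

The first construction fails outright, while the encoded form parses and decodes back to the intended path.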



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5235) Infinite loop with ORC file and Hive 0.11

2013-10-04 Thread Prasanth J (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786856#comment-13786856
 ] 

Prasanth J commented on HIVE-5235:
--

Hi Pere,

I am working with Owen to help resolve this issue.
We generated tables with tens of columns of random integers over different
ranges, around 25M rows, and created ORC files with Hive 0.11. The ORC files
had multiple stripes to mimic a similar scenario. We tried multiple runs with
random datasets (also with JDK 7) but were not able to reproduce the issue. We
tried it on Mac OS X but haven't tried Gentoo yet. Have you tried this on any
OS other than Gentoo? If so, do you see similar issues on other OSes? Also,
have you tried this on JDK 6? Would it be possible to post/send only the
segment of integer values from the last column, between the rows that Owen
specified, just to make sure there isn't some weird pattern in the input
dataset?

> Infinite loop with ORC file and Hive 0.11
> -
>
> Key: HIVE-5235
> URL: https://issues.apache.org/jira/browse/HIVE-5235
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.11.0
> Environment: Gentoo linux with Hortonworks Hadoop 
> hadoop-1.1.2.23.tar.gz and Apache Hive 0.11d
>Reporter: Iván de Prado
>Priority: Blocker
>
> We are using Hive 0.11 with the ORC file format, and some tasks get blocked in 
> some kind of infinite loop. They keep running indefinitely when we set a huge 
> task expiry timeout. If we set the expiry time to 600 seconds, the tasks fail 
> for not reporting progress, and finally the job fails. 
> That is not consistent, and sometimes the behavior changes between job 
> executions. It happens for different queries.
> We are using Hive 0.11 with Hadoop hadoop-1.1.2.23 from Hortonworks. The task 
> that is blocked keeps consuming 100% CPU, and the stack trace is consistently 
> the same. Everything points to some kind of infinite loop. My guess is that it 
> is related to the ORC file: maybe some pointer is written incorrectly, 
> producing an infinite loop when reading, or maybe there is a bug in the 
> reading stage.
> More information below. The stack trace:
> {noformat} 
> "main" prio=10 tid=0x7f20a000a800 nid=0x1ed2 runnable [0x7f20a8136000]
>java.lang.Thread.State: RUNNABLE
>   at java.util.zip.Inflater.inflateBytes(Native Method)
>   at java.util.zip.Inflater.inflate(Inflater.java:256)
>   - locked <0xf42a6ca0> (a java.util.zip.ZStreamRef)
>   at 
> org.apache.hadoop.hive.ql.io.orc.ZlibCodec.decompress(ZlibCodec.java:64)
>   at 
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.readHeader(InStream.java:128)
>   at 
> org.apache.hadoop.hive.ql.io.orc.InStream$CompressedStream.read(InStream.java:143)
>   at 
> org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readVulong(SerializationUtils.java:54)
>   at 
> org.apache.hadoop.hive.ql.io.orc.SerializationUtils.readVslong(SerializationUtils.java:65)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReader.readValues(RunLengthIntegerReader.java:66)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RunLengthIntegerReader.next(RunLengthIntegerReader.java:81)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$IntTreeReader.next(RecordReaderImpl.java:332)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl$StructTreeReader.next(RecordReaderImpl.java:802)
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.next(RecordReaderImpl.java:1214)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:71)
>   at 
> org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$OrcRecordReader.next(OrcInputFormat.java:46)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:300)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:218)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:236)
>   - eliminated <0xe1459700> (a 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader)
>   at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)
>   - locked <0x0

[jira] [Commented] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786838#comment-13786838
 ] 

Eugene Koifman commented on HIVE-5452:
--

HIVE-5274 explains why this is needed

+1

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: BUG-5452.patch
>
>
> HCatalog e2e test Pig_HBase_1 tries to read data from a table it created 
> with the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat 
> loader org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using 
> org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Description: 
HCatalog e2e test Pig_HBase_1 tries to read data from a table it created with 
the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
{code}
a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store a 
into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
{code}
Following error is thrown in the log:
{noformat}
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
 ERROR 2017: Internal error creating job configuration.
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at 
org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1297)
at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
at 
org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:607)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: java.lang.ClassCastException: 
org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
org.apache.hcatalog.mapreduce.InputJobInfo
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
at 
org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
... 18 more
Caused by: java.lang.ClassCastException: 
org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
org.apache.hcatalog.mapreduce.InputJobInfo
at 
org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
at 
org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
... 21 more
{noformat}
The pig script should be using org.apache.hcatalog.pig.HCatLoader() 
instead.

  was:
WebHCat e2e test Pig_HBase_1 tries to read data from a table it created with 
the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
{code}
a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store a 
into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
{code}
Following error is thrown in the log:
{noformat}
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
 ERROR 2017: Internal error creating job configuration.
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at 
org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
 

[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Component/s: (was: WebHCat)
 HCatalog

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: BUG-5452.patch
>
>
> WebHCat e2e test Pig_HBase_1 tries to read data from a table it created with 
> the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
> org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using 
> org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5452) HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Summary: HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
ClassCastException  (was: WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are 
failing with ClassCastException)

> HCatalog e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> -
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: BUG-5452.patch
>
>
> WebHCat e2e test Pig_HBase_1 tries to read data from a table it created with 
> the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
> org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using 
> org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5452) WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Attachment: BUG-5452.patch

Attached is the patch fixing the issue.

> WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> 
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: BUG-5452.patch
>
>
> WebHCat e2e test Pig_HBase_1 tries to read data from a table it created with 
> the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
> org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using 
> org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5452) WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Deepesh Khandelwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepesh Khandelwal updated HIVE-5452:
-

Affects Version/s: 0.12.0
   Status: Patch Available  (was: Open)

> WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are failing with 
> ClassCastException
> 
>
> Key: HIVE-5452
> URL: https://issues.apache.org/jira/browse/HIVE-5452
> Project: Hive
>  Issue Type: Bug
>  Components: WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: BUG-5452.patch
>
>
> WebHCat e2e test Pig_HBase_1 tries to read data from a table it created with 
> the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
> org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
> {code}
> a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store 
> a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
> {code}
> Following error is thrown in the log:
> {noformat}
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
>  ERROR 2017: Internal error creating job configuration.
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
> at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
> at 
> org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
> at org.apache.pig.PigServer.execute(PigServer.java:1297)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
> at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
> at 
> org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
> at 
> org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
> at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> at org.apache.pig.Main.run(Main.java:607)
> at org.apache.pig.Main.main(Main.java:156)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.io.IOException: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
> at 
> org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
> at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
> ... 18 more
> Caused by: java.lang.ClassCastException: 
> org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
> org.apache.hcatalog.mapreduce.InputJobInfo
> at 
> org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
> at 
> org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
> at 
> org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
> at 
> org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
> ... 21 more
> {noformat}
> The pig script should be using 
> org.apache.hcatalog.pig.HCatLoader() instead.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5452) WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are failing with ClassCastException

2013-10-04 Thread Deepesh Khandelwal (JIRA)
Deepesh Khandelwal created HIVE-5452:


 Summary: WebHCat e2e test Pig_HBase_1 and Pig_HBase_2 are failing 
with ClassCastException
 Key: HIVE-5452
 URL: https://issues.apache.org/jira/browse/HIVE-5452
 Project: Hive
  Issue Type: Bug
  Components: WebHCat
Reporter: Deepesh Khandelwal
Assignee: Deepesh Khandelwal


WebHCat e2e test Pig_HBase_1 tries to read data from a table it created with 
the org.apache.hcatalog.hbase.HBaseHCatStorageHandler, using the HCat loader 
org.apache.hive.hcatalog.pig.HCatLoader(). Following is the pig script.
{code}
a = load 'pig_hbase_1' using org.apache.hive.hcatalog.pig.HCatLoader(); store a 
into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
{code}
The following error is thrown in the log:
{noformat}
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
 ERROR 2017: Internal error creating job configuration.
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:850)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:296)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:190)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
at 
org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
at org.apache.pig.PigServer.execute(PigServer.java:1297)
at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
at 
org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
at 
org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:607)
at org.apache.pig.Main.main(Main.java:156)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.io.IOException: java.lang.ClassCastException: 
org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
org.apache.hcatalog.mapreduce.InputJobInfo
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:87)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:63)
at 
org.apache.hive.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:119)
at 
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:475)
... 18 more
Caused by: java.lang.ClassCastException: 
org.apache.hive.hcatalog.mapreduce.InputJobInfo cannot be cast to 
org.apache.hcatalog.mapreduce.InputJobInfo
at 
org.apache.hcatalog.hbase.HBaseHCatStorageHandler.configureInputJobProperties(HBaseHCatStorageHandler.java:106)
at 
org.apache.hive.hcatalog.common.HCatUtil.getInputJobProperties(HCatUtil.java:466)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.extractPartInfo(InitializeInput.java:161)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.getInputJobInfo(InitializeInput.java:137)
at 
org.apache.hive.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:86)
at 
org.apache.hive.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:85)
... 21 more
{noformat}
The pig script should instead use org.apache.hcatalog.pig.HCatLoader().
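Since the table was created through org.apache.hcatalog.hbase.HBaseHCatStorageHandler, the loader from the matching org.apache.hcatalog package tree should be used. The corrected script would presumably read (same script as above, with only the loader class swapped):

{code}
a = load 'pig_hbase_1' using org.apache.hcatalog.pig.HCatLoader();
store a into '/user/hcat/out/root-1380933875-pig.conf/Pig_HBase_1_0_benchmark.out';
{code}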



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786824#comment-13786824
 ] 

Sushanth Sowmyan commented on HIVE-5402:


Thanks, Ashutosh. :)

And yes, I'd agree with that. That's another thing on my long-term mental list 
that I need to prioritize and push for.

> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, there is a bug that causes SBAP to incorrectly conclude that it's 
> running from the metastore-side when it's actually running from the 
> client-side that causes it to throw a IllegalStateException claiming the 
> warehouse variable isn't set.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786820#comment-13786820
 ] 

Ashutosh Chauhan commented on HIVE-5402:


Configuration vs. HiveConf is important, though of lesser concern to me. But I do 
feel strongly about the two independent mechanisms in ql and metastore with 
HiveProxy as an intermediary. A redesign there is certainly warranted, but clearly 
out of scope for this jira. Let's take that up in a separate jira.
+1 for this.  

> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, there is a bug that causes SBAP to incorrectly conclude that it's 
> running from the metastore-side when it's actually running from the 
> client-side that causes it to throw a IllegalStateException claiming the 
> warehouse variable isn't set.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5431) PassthroughOutputFormat SH changes causes IllegalArgumentException

2013-10-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786813#comment-13786813
 ] 

Ashutosh Chauhan commented on HIVE-5431:


+1 

> PassthroughOutputFormat SH changes causes IllegalArgumentException
> --
>
> Key: HIVE-5431
> URL: https://issues.apache.org/jira/browse/HIVE-5431
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5431.2.patch, HIVE-5431.patch
>
>
> The recent changes with HIVE-4331 introduced a new key 
> "hive.passthrough.storagehandler.of", whose value is set only on storage 
> handler writes, but obviously, will not be set on reads. However, 
> PlanUtils.configureJobPropertiesForStorageHandler winds up trying to set the 
> key for both cases into jobProperties, which causes any reads that are not 
> preceded by writes to fail.
> Basically, if you have a .q in which you insert data into an HBase table and 
> then read it, it's okay. If you have a .q in which you only read data, it 
> throws an IllegalArgumentException, like so:
> {noformat}
> 2013-09-30 16:20:01,989 ERROR CliDriver (SessionState.java:printError(419)) - 
> Failed with exception java.io.IOException:java.lang.IllegalArgumentException: 
> Property value must not be null
> java.io.IOException: java.lang.IllegalArgumentException: Property value must 
> not be null
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:551)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.IllegalArgumentException: Property value must not be null
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1826)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:380)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
> ... 17 more
> {noformat}
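A minimal sketch of the kind of guard the fix presumably introduces (the class and helper names below are illustrative, not the actual PlanUtils code): copy the passthrough key into jobProperties only when a value is actually present, so that read paths, which never set it, do not reach Configuration.set with a null value.

```java
import java.util.HashMap;
import java.util.Map;

public class PassthroughGuard {
    public static final String PASSTHROUGH_OF = "hive.passthrough.storagehandler.of";

    // Hypothetical helper: copy the passthrough key only when it was set
    // (i.e. on the write path). Configuration.set rejects null values, so an
    // unconditional copy fails on reads with IllegalArgumentException.
    public static void copyPassthroughProperty(Map<String, String> tableProps,
                                               Map<String, String> jobProperties) {
        String outputFormat = tableProps.get(PASSTHROUGH_OF);
        if (outputFormat != null) {
            jobProperties.put(PASSTHROUGH_OF, outputFormat);
        }
    }

    public static void main(String[] args) {
        Map<String, String> readSideProps = new HashMap<>(); // key never set on reads
        Map<String, String> jobProps = new HashMap<>();
        copyPassthroughProperty(readSideProps, jobProps);
        System.out.println(jobProps.isEmpty()); // true: nothing copied, no null put
    }
}
```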



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5372) Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info repetition

2013-10-04 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786812#comment-13786812
 ] 

Xuefu Zhang commented on HIVE-5372:
---

Finally I got a full test result. Not bad. I think the majority of the failed tests 
are due to result diffs. I will address them in an updated patch. However, I don't 
believe this will have much impact on the code changes. So, please continue 
your review. Thanks!

> Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info 
> repetition
> 
>
> Key: HIVE-5372
> URL: https://issues.apache.org/jira/browse/HIVE-5372
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5372.1.patch, HIVE-5372.patch
>
>
> TypeInfo with its sub-classes and the PrimitiveTypeEntry class seem to have 
> repetitive information, such as type names and type params. It would be good 
> if we could streamline the information organization.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5431) PassthroughOutputFormat SH changes causes IllegalArgumentException

2013-10-04 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5431:
---

Status: Patch Available  (was: Open)

> PassthroughOutputFormat SH changes causes IllegalArgumentException
> --
>
> Key: HIVE-5431
> URL: https://issues.apache.org/jira/browse/HIVE-5431
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5431.2.patch, HIVE-5431.patch
>
>
> The recent changes with HIVE-4331 introduced a new key 
> "hive.passthrough.storagehandler.of", whose value is set only on storage 
> handler writes, but obviously, will not be set on reads. However, 
> PlanUtils.configureJobPropertiesForStorageHandler winds up trying to set the 
> key for both cases into jobProperties, which causes any reads that are not 
> preceded by writes to fail.
> Basically, if you have a .q in which you insert data into an HBase table and 
> then read it, it's okay. If you have a .q in which you only read data, it 
> throws an IllegalArgumentException, like so:
> {noformat}
> 2013-09-30 16:20:01,989 ERROR CliDriver (SessionState.java:printError(419)) - 
> Failed with exception java.io.IOException:java.lang.IllegalArgumentException: 
> Property value must not be null
> java.io.IOException: java.lang.IllegalArgumentException: Property value must 
> not be null
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:551)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.IllegalArgumentException: Property value must not be null
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1826)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:380)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
> ... 17 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5431) PassthroughOutputFormat SH changes causes IllegalArgumentException

2013-10-04 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786793#comment-13786793
 ] 

Sushanth Sowmyan commented on HIVE-5431:


[~thejas] & [~ashutoshc] : Writing a test for this is proving difficult 
because it can't be written as a .q test; instead it requires creating an 
HBase table outside of Hive and then reading it with Hive in a job to demonstrate 
the bug. I've manually tested this bug and this patch.

Would you prefer to spin 0.12 RC0 without this patch and then add it along 
with a testcase in the RC1 timeframe, or would you prefer to take this patch now 
and open another jira for a test? I'm okay with either approach, and will 
assume the former and leave this open. If you prefer the latter, then please 
feel free to commit/close and open another jira for the test. I'm going to go ahead 
and mark it as patch-available, though, so that the automated tests at least 
pick it up and do a full run with this patch.

> PassthroughOutputFormat SH changes causes IllegalArgumentException
> --
>
> Key: HIVE-5431
> URL: https://issues.apache.org/jira/browse/HIVE-5431
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5431.2.patch, HIVE-5431.patch
>
>
> The recent changes with HIVE-4331 introduced a new key 
> "hive.passthrough.storagehandler.of", whose value is set only on storage 
> handler writes, but obviously, will not be set on reads. However, 
> PlanUtils.configureJobPropertiesForStorageHandler winds up trying to set the 
> key for both cases into jobProperties, which causes any reads that are not 
> preceded by writes to fail.
> Basically, if you have a .q in which you insert data into an HBase table and 
> then read it, it's okay. If you have a .q in which you only read data, it 
> throws an IllegalArgumentException, like so:
> {noformat}
> 2013-09-30 16:20:01,989 ERROR CliDriver (SessionState.java:printError(419)) - 
> Failed with exception java.io.IOException:java.lang.IllegalArgumentException: 
> Property value must not be null
> java.io.IOException: java.lang.IllegalArgumentException: Property value must 
> not be null
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:551)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.IllegalArgumentException: Property value must not be null
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1826)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:380)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
> ... 17 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5372) Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info repetition

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786786#comment-13786786
 ] 

Hive QA commented on HIVE-5372:
---



{color:red}Overall{color}: -1 at least one tests failed

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606903/HIVE-5372.1.patch

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 4024 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_type_conversions_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_varchar_nested_types
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_varchar_length_1
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_varchar_length_2
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalid_varchar_length_3
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_script_broken_pipe1
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_serde_regex
org.apache.hadoop.hive.ql.parse.TestParse.testParse_input9
org.apache.hadoop.hive.serde2.typeinfo.TestTypeInfoUtils.testVarcharNoParams
org.apache.hive.jdbc.TestJdbcDriver2.testDataTypes
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1035/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1035/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests failed with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

> Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info 
> repetition
> 
>
> Key: HIVE-5372
> URL: https://issues.apache.org/jira/browse/HIVE-5372
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5372.1.patch, HIVE-5372.patch
>
>
> TypeInfo with its sub-classes and the PrimitiveTypeEntry class seem to have 
> repetitive information, such as type names and type params. It would be good 
> if we could streamline the information organization.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5431) PassthroughOutputFormat SH changes causes IllegalArgumentException

2013-10-04 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-5431:
---

Attachment: HIVE-5431.2.patch

Updated patch, moving code around to make it cleaner.

> PassthroughOutputFormat SH changes causes IllegalArgumentException
> --
>
> Key: HIVE-5431
> URL: https://issues.apache.org/jira/browse/HIVE-5431
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5431.2.patch, HIVE-5431.patch
>
>
> The recent changes with HIVE-4331 introduced a new key 
> "hive.passthrough.storagehandler.of", whose value is set only on storage 
> handler writes, but obviously, will not be set on reads. However, 
> PlanUtils.configureJobPropertiesForStorageHandler winds up trying to set the 
> key for both cases into jobProperties, which causes any reads that are not 
> preceded by writes to fail.
> Basically, if you have a .q in which you insert data into an HBase table and 
> then read it, it's okay. If you have a .q in which you only read data, it 
> throws an IllegalArgumentException, like so:
> {noformat}
> 2013-09-30 16:20:01,989 ERROR CliDriver (SessionState.java:printError(419)) - 
> Failed with exception java.io.IOException:java.lang.IllegalArgumentException: 
> Property value must not be null
> java.io.IOException: java.lang.IllegalArgumentException: Property value must 
> not be null
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:551)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:489)
> at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:136)
> at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:1471)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:271)
> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:413)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:348)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:446)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:456)
> at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:737)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:675)
> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:614)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> Caused by: java.lang.IllegalArgumentException: Property value must not be null
> at 
> com.google.common.base.Preconditions.checkArgument(Preconditions.java:88)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:810)
> at org.apache.hadoop.conf.Configuration.set(Configuration.java:792)
> at 
> org.apache.hadoop.hive.ql.exec.Utilities.copyTableJobPropertiesToConf(Utilities.java:1826)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:380)
> at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:515)
> ... 17 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4542) TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected database

2013-10-04 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786772#comment-13786772
 ] 

Thejas M Nair commented on HIVE-4542:
-

Committed this test fix to 0.12.


> TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected database
> ---
>
> Key: HIVE-4542
> URL: https://issues.apache.org/jira/browse/HIVE-4542
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Fix For: 0.12.0
>
> Attachments: D13269.1.patch, D13269.2.patch, HIVE-4542.1.patch, 
> HIVE-4542.2.patch, HIVE-4542.3.patch
>
>
> The check for database name in TestJdbcDriver2.testMetaDataGetSchemas fails 
> with the error -
> {code}
> junit.framework.ComparisonFailure: expected:<...efault> but was:<...bname>
> {code}
> i.e., a database called dbname is found, which it does not expect. Whether this 
> failure happens depends on the order in which the function gets the databases; 
> if the "default" database is returned first, it succeeds.
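An order-independent version of the check (a sketch; the actual HIVE-4542 patch may differ) collects all returned schema names first and asserts membership, rather than comparing against whichever database happens to come back first:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SchemaCheck {
    // Collect every schema name, then test membership. This passes no matter
    // what order the metastore returns databases in, unlike a positional
    // comparison that assumes "default" arrives first.
    public static boolean containsSchema(List<String> schemaNames, String wanted) {
        Set<String> names = new HashSet<>(schemaNames);
        return names.contains(wanted);
    }

    public static void main(String[] args) {
        // "dbname" listed first: the ordering that broke the original test.
        System.out.println(containsSchema(Arrays.asList("dbname", "default"), "default"));
    }
}
```

In the real test the list of names would be read from JDBC's DatabaseMetaData.getSchemas(), taking the TABLE_SCHEM column of each row.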



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786774#comment-13786774
 ] 

Sushanth Sowmyan commented on HIVE-5402:


We do have an issue on how to do authorization, and we need to know whether 
we're called from ql, in which case we have a Hive, or from the 
metastore, in which case we do not have a Hive but we do have a 
HiveMetaStoreHandler. There is a class called HiveProxy that tries to 
unify this behaviour, but to instantiate it, we need to know whether 
we're being instantiated from the metastore or from ql. We could solve this by 
having two separate classes, and the original intent of SBAP was to work from 
the metastore, but that is unnecessary duplication.

The one other change I could consider from your request is that client-side 
auth would also run through a local metastore, and thus we could do 
the authorization from there itself. I would agree with that approach as well, 
although it requires some beefing up first. That, however, would be another 
redesign if we wanted to pursue it.

Also, getConf() returns Configuration because HiveAuthorizationProvider 
implements Configurable, and we use a hadoop interface in the process. Changing that 
would broaden the scope significantly, and if you want to go around 
changing Configurable to HiveConfigurable all over the place, that should again 
be a different task to undertake. :)


> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, there is a bug that causes SBAP to incorrectly conclude that it's 
> running from the metastore-side when it's actually running from the 
> client-side that causes it to throw a IllegalStateException claiming the 
> warehouse variable isn't set.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4542) TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected database

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-4542:


Fix Version/s: (was: 0.13.0)
   0.12.0

> TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected database
> ---
>
> Key: HIVE-4542
> URL: https://issues.apache.org/jira/browse/HIVE-4542
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Fix For: 0.12.0
>
> Attachments: D13269.1.patch, D13269.2.patch, HIVE-4542.1.patch, 
> HIVE-4542.2.patch, HIVE-4542.3.patch
>
>
> The check for database name in TestJdbcDriver2.testMetaDataGetSchemas fails 
> with the error -
> {code}
> junit.framework.ComparisonFailure: expected:<...efault> but was:<...bname>
> {code}
> i.e., a database called dbname is found, which it does not expect. Whether this 
> failure happens depends on the order in which the function gets the databases; 
> if the "default" database is returned first, it succeeds.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5334) Milestone 3: Some tests pass under maven

2013-10-04 Thread Edward Capriolo (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786758#comment-13786758
 ] 

Edward Capriolo commented on HIVE-5334:
---

Looks fine

> Milestone 3: Some tests pass under maven
> 
>
> Key: HIVE-5334
> URL: https://issues.apache.org/jira/browse/HIVE-5334
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5334.patch, HIVE-5334.patch
>
>
> This milestone is that some tests pass and therefore we have the basic unit 
> test environment setup. We'll hunt down the rest of the failing tests in 
> future jiras.
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5432) self join for a table with serde definition fails with classNotFoundException, single queries work fine

2013-10-04 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786755#comment-13786755
 ] 

Xuefu Zhang commented on HIVE-5432:
---

[~nitinpawar432] Could you please try your query with the latest trunk? It does 
not seem to be reproducible there.

> self join for a table with serde definition fails with 
> classNotFoundException, single queries work fine
> ---
>
> Key: HIVE-5432
> URL: https://issues.apache.org/jira/browse/HIVE-5432
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 0.11.0
> Environment: rhel6.4 
>Reporter: Nitin Pawar
>Assignee: Xuefu Zhang
>
> Steps to reproduce 
> {code}
> hive> add jar /home/hive/udfs/hive-serdes-1.0-SNAPSHOT.jar;   
>  
> Added /home/hive/udfs/hive-serdes-1.0-SNAPSHOT.jar to class path
> Added resource: /home/hive/udfs/hive-serdes-1.0-SNAPSHOT.jar
> hive> create table if not exists test(a string,b string) ROW FORMAT SERDE 
> 'com.cloudera.hive.serde.JSONSerDe';
> OK
> Time taken: 0.159 seconds
> hive> load data local inpath '/tmp/1' overwrite into table test;  
>  
> Copying data from file:/tmp/1
> Copying file: file:/tmp/1
> Loading data to table default.test
> Table default.test stats: [num_partitions: 0, num_files: 1, num_rows: 0, 
> total_size: 51, raw_data_size: 0]
> OK
> Time taken: 0.659 seconds
> hive> select a from test;
> Total MapReduce jobs = 1
> Launching Job 1 out of 1
> ...
> ...
> hive> select * from (select b from test where a="test")x join (select b from 
> test where a="test1")y on (x.b = y.b);
> Total MapReduce jobs = 1
> setting HADOOP_USER_NAME  hive
> Execution log at: /tmp/hive/.log
> java.lang.ClassNotFoundException: com.cloudera.hive.serde.JSONSerDe
> Continuing ...
> 2013-10-03 05:13:00   Starting to launch local task to process map join;  
> maximum memory = 1065484288
> org.apache.hadoop.hive.ql.metadata.HiveException: Failed with exception 
> nulljava.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getRowInspectorFromTable(FetchOperator.java:230)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getOutputObjectInspector(FetchOperator.java:595)
>   at 
> org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:406)
>   at 
> org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:290)
>   at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:682)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.FetchOperator.getOutputObjectInspector(FetchOperator.java:631)
>   at 
> org.apache.hadoop.hive.ql.exec.MapredLocalTask.initializeOperators(MapredLocalTask.java:406)
>   at 
> org.apache.hadoop.hive.ql.exec.MapredLocalTask.executeFromChildJVM(MapredLocalTask.java:290)
>   at org.apache.hadoop.hive.ql.exec.ExecDriver.main(ExecDriver.java:682)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> Execution failed with exit status: 2
> Obtaining error information
> Task failed!
> Task ID:
> {code}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-784) Support uncorrelated subqueries in the WHERE clause

2013-10-04 Thread Harish Butani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786749#comment-13786749
 ] 

Harish Butani commented on HIVE-784:


Just uploaded a script with TPC-H queries Q4, Q15, Q16, and Q18 written using 
subqueries.
Validated the results against those in the spec.
The script includes the DDL at the bottom.

> Support uncorrelated subqueries in the WHERE clause
> ---
>
> Key: HIVE-784
> URL: https://issues.apache.org/jira/browse/HIVE-784
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Ning Zhang
>Assignee: Matthew Weaver
> Attachments: HIVE-784.1.patch.txt, HIVE-784.2.patch, 
> SubQuerySpec.pdf, tpchQueriesUsingSubQueryClauses.sql
>
>
> Hive currently only supports views in the FROM clause; some Facebook use cases 
> suggest that Hive should support subqueries, such as those connected by 
> IN/EXISTS, in the WHERE clause. 
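For illustration, the semantics being requested can be sketched against an 
in-memory SQLite database (the table layout below is a made-up miniature of 
TPC-H, not taken from the attached script); the join rewrite shows the 
workaround needed in Hive versions without WHERE-clause subquery support:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (o_orderkey INTEGER, o_totalprice REAL);
CREATE TABLE lineitem (l_orderkey INTEGER, l_quantity INTEGER);
INSERT INTO orders VALUES (1, 100.0), (2, 250.0), (3, 75.0);
INSERT INTO lineitem VALUES (1, 5), (2, 400), (2, 10);
""")

# Uncorrelated IN subquery in the WHERE clause (what HIVE-784 asks for).
with_subquery = conn.execute("""
    SELECT o_orderkey FROM orders
    WHERE o_orderkey IN (SELECT l_orderkey FROM lineitem WHERE l_quantity > 300)
""").fetchall()

# Equivalent join rewrite, the manual workaround without subquery support.
with_join = conn.execute("""
    SELECT DISTINCT o.o_orderkey
    FROM orders o
    JOIN (SELECT l_orderkey FROM lineitem WHERE l_quantity > 300) s
      ON o.o_orderkey = s.l_orderkey
""").fetchall()

assert with_subquery == with_join == [(2,)]
```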



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-784) Support uncorrelated subqueries in the WHERE clause

2013-10-04 Thread Harish Butani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-784?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Butani updated HIVE-784:
---

Attachment: tpchQueriesUsingSubQueryClauses.sql

> Support uncorrelated subqueries in the WHERE clause
> ---
>
> Key: HIVE-784
> URL: https://issues.apache.org/jira/browse/HIVE-784
> Project: Hive
>  Issue Type: New Feature
>  Components: Query Processor
>Reporter: Ning Zhang
>Assignee: Matthew Weaver
> Attachments: HIVE-784.1.patch.txt, HIVE-784.2.patch, 
> SubQuerySpec.pdf, tpchQueriesUsingSubQueryClauses.sql
>
>
> Hive currently only support views in the FROM-clause, some Facebook use cases 
> suggest that Hive should support subqueries such as those connected by 
> IN/EXISTS in the WHERE-clause. 



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5451) Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez

2013-10-04 Thread Gunther Hagleitner (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786740#comment-13786740
 ] 

Gunther Hagleitner commented on HIVE-5451:
--

Sorry - I meant branch. I committed to the Tez branch, *not* trunk.

> Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez
> ---
>
> Key: HIVE-5451
> URL: https://issues.apache.org/jira/browse/HIVE-5451
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: tez-branch
>
> Attachments: HIVE-5451.1.patch
>
>
> In MapOperator we use a conf variable to determine the input file; everywhere 
> else we rely on IOContext. In Tez the problem is that the record reader and 
> the processor have different confs, so they can't communicate via the job conf.
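The conf-copy problem can be sketched as follows (Python, with illustrative 
names; these are not Hive's actual classes): a value written into one 
component's private copy of the conf is invisible to the other component, 
while a shared context object is visible to both.

```python
# Each component gets its own copy of the job conf, as in Tez.
record_reader_conf = {"map.input.file": "/data/part-00000"}
processor_conf = {}  # the processor's copy never saw the update

# Communicating through the conf fails: the processor's copy is stale.
assert processor_conf.get("map.input.file") is None

# A shared, process-wide context (the role IOContext plays) works instead.
class IOContext:
    input_path = None

IOContext.input_path = "/data/part-00000"      # set by the record reader
assert IOContext.input_path == "/data/part-00000"  # read by the processor
```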



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HIVE-5451) Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez

2013-10-04 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner resolved HIVE-5451.
--

Resolution: Fixed

Committed to trunk.

> Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez
> ---
>
> Key: HIVE-5451
> URL: https://issues.apache.org/jira/browse/HIVE-5451
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: tez-branch
>
> Attachments: HIVE-5451.1.patch
>
>
> In MapOperator we use a conf variable to determine the input file; everywhere 
> else we rely on IOContext. In Tez the problem is that the record reader and 
> the processor have different confs, so they can't communicate via the job conf.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5451) Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez

2013-10-04 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner updated HIVE-5451:
-

Attachment: HIVE-5451.1.patch

> Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez
> ---
>
> Key: HIVE-5451
> URL: https://issues.apache.org/jira/browse/HIVE-5451
> Project: Hive
>  Issue Type: Bug
>Reporter: Gunther Hagleitner
>Assignee: Gunther Hagleitner
> Fix For: tez-branch
>
> Attachments: HIVE-5451.1.patch
>
>
> In MapOperator we use a conf variable to determine the input file; everywhere 
> else we rely on IOContext. In Tez the problem is that the record reader and 
> the processor have different confs, so they can't communicate via the job conf.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5365) Boolean constants in the query are not handled correctly.

2013-10-04 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5365:
---

Attachment: HIVE-5365.3.patch

Uploading the same patch to trigger pre-commit build.

> Boolean constants in the query are not handled correctly.
> -
>
> Key: HIVE-5365
> URL: https://issues.apache.org/jira/browse/HIVE-5365
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5365.1.patch, HIVE-5365.2.patch, HIVE-5365.3.patch
>
>
> Boolean constants in the query are not handled correctly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5365) Boolean constants in the query are not handled correctly.

2013-10-04 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5365:
---

Status: Patch Available  (was: Open)

> Boolean constants in the query are not handled correctly.
> -
>
> Key: HIVE-5365
> URL: https://issues.apache.org/jira/browse/HIVE-5365
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5365.1.patch, HIVE-5365.2.patch, HIVE-5365.3.patch
>
>
> Boolean constants in the query are not handled correctly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5115) Webhcat e2e tests TestMapReduce_1 and TestHeartbeat_2 require changes for Hadoop 2

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786733#comment-13786733
 ] 

Hudson commented on HIVE-5115:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #479 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/479/])
HIVE-5115 : Webhcat e2e tests TestMapReduce_1 and TestHeartbeat_2 require 
changes for Hadoop 2 (Deepesh Khandelwal via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529328)
* /hive/trunk/hcatalog/src/test/e2e/templeton/README.txt
* /hive/trunk/hcatalog/src/test/e2e/templeton/tests/jobsubmission2.conf


> Webhcat e2e tests TestMapReduce_1 and TestHeartbeat_2 require changes for 
> Hadoop 2
> --
>
> Key: HIVE-5115
> URL: https://issues.apache.org/jira/browse/HIVE-5115
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5115.patch
>
>
> In the webhcat e2e testsuite we have two MR job submission tests
> TestMapReduce_1 (in jobsubmission.conf) runs the hadoop "wordcount" example. 
> Intention of this one is to test MR job submission using WebHCat.
> TestHeartbeat_2 (in jobsubmission2.conf) runs the hadoop "sleep" example. 
> Intention of this one is to test a long running (>10min) WebHCat MR job, see 
> HIVE-4808.
> In Hadoop 1, both of these example MR applications are packaged in 
> hadoop-examples.jar
> In Hadoop 2, "sleep" job is bundled in hadoop-mapreduce-client-jobclient.jar 
> and "wordcount" is bundled in hadoop-mapreduce-examples.jar
> Currently the webhcat tests assume that both these MR applications are in one 
> jar that we copy as hexamples.jar.
> To run these against Hadoop 2 I can think of three simple solutions:
> (1) Stick with one jar and run "sleep" application in the TestMapReduce_1 
> test as well.
> (2) Eliminate the test TestMapReduce_1 as TestHeartbeat_2 runs a MR job as 
> well.
> (3) Require two different jars for Hadoop 2 and call them hclient.jar 
> (containing "sleep" application) and hexamples.jar (containing "wordcount" 
> application). For Hadoop 1, we would make two copies of the same 
> hadoop-examples.jar application and call them hsleep.jar and examples.jar.
> The three approaches mentioned here would require the fewest changes. My 
> inclination is towards (2).
> Let me know what you think and I can provide the patch.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5433) Fix varchar unit tests to work with hadoop-2.1.1

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786732#comment-13786732
 ] 

Hudson commented on HIVE-5433:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #479 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/479/])
HIVE-5433: Fix varchar unit tests to work with hadoop-2.1.1 (Jason Dere via 
Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529338)
* /hive/trunk/ql/src/test/queries/clientpositive/alter_varchar1.q
* /hive/trunk/ql/src/test/queries/clientpositive/varchar_1.q
* /hive/trunk/ql/src/test/queries/clientpositive/varchar_nested_types.q
* /hive/trunk/ql/src/test/queries/clientpositive/varchar_udf1.q
* /hive/trunk/ql/src/test/results/clientpositive/alter_varchar1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/varchar_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/varchar_nested_types.q.out
* /hive/trunk/ql/src/test/results/clientpositive/varchar_udf1.q.out


> Fix varchar unit tests to work with hadoop-2.1.1
> 
>
> Key: HIVE-5433
> URL: https://issues.apache.org/jira/browse/HIVE-5433
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.12.0
>
> Attachments: HIVE-5433.1.patch, HIVE-5433.1.patch
>
>
> A few of the varchar tests fail when testing against hadoop-2.1.1.  It looks 
> like some of the input/output rows used in the tests need to be sorted so 
> that the results look consistent across hadoop versions.
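The usual fix, sketched here with made-up row data, is to sort both sides 
before comparing instead of relying on whatever row order a particular Hadoop 
version produces:

```python
# Rows as emitted by two Hadoop versions; same content, different order.
rows_hadoop1 = [("alice", 1), ("bob", 2), ("carol", 3)]
rows_hadoop2 = [("bob", 2), ("carol", 3), ("alice", 1)]

# A naive positional comparison is flaky across versions...
assert rows_hadoop1 != rows_hadoop2

# ...while sorting both sides first makes the check order-insensitive.
assert sorted(rows_hadoop1) == sorted(rows_hadoop2)
```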



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5451) Use IOContext instead of HADOOPMAPFILENAME in MapOperator setup for Tez

2013-10-04 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HIVE-5451:


 Summary: Use IOContext instead of HADOOPMAPFILENAME in MapOperator 
setup for Tez
 Key: HIVE-5451
 URL: https://issues.apache.org/jira/browse/HIVE-5451
 Project: Hive
  Issue Type: Bug
Reporter: Gunther Hagleitner
Assignee: Gunther Hagleitner
 Fix For: tez-branch


In MapOperator we use a conf variable to determine the input file; everywhere 
else we rely on IOContext. In Tez the problem is that the record reader and the 
processor have different confs, so they can't communicate via the job conf.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5365) Boolean constants in the query are not handled correctly.

2013-10-04 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5365:
---

Status: Open  (was: Patch Available)

> Boolean constants in the query are not handled correctly.
> -
>
> Key: HIVE-5365
> URL: https://issues.apache.org/jira/browse/HIVE-5365
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5365.1.patch, HIVE-5365.2.patch
>
>
> Boolean constants in the query are not handled correctly.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5449) Hive schematool info option incorrectly reports error for Postgres metastore

2013-10-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786730#comment-13786730
 ] 

Ashutosh Chauhan commented on HIVE-5449:


+1

> Hive schematool info option incorrectly reports error for Postgres metastore
> 
>
> Key: HIVE-5449
> URL: https://issues.apache.org/jira/browse/HIVE-5449
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.12.0, 0.13.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5449.1.patch
>
>
> The schema tool has an option to verify the schema version stored in the 
> metastore. This is implemented as a simple select query executed via JDBC. 
> The problem is that Postgres requires object names to be quoted due to the 
> way tables are created. It's similar to issues hit by metastore direct SQL 
> (HIVE-5264, HIVE-5265 etc).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5450) pTest2 self-test is failing

2013-10-04 Thread Ashutosh Chauhan (JIRA)
Ashutosh Chauhan created HIVE-5450:
--

 Summary: pTest2 self-test is failing
 Key: HIVE-5450
 URL: https://issues.apache.org/jira/browse/HIVE-5450
 Project: Hive
  Issue Type: Task
  Components: Testing Infrastructure
Affects Versions: 0.13.0
Reporter: Ashutosh Chauhan


The following test failed when I ran mvn package:
{code}
Running org.apache.hive.ptest.execution.TestPhase
2013-10-04 22:57:20,150 ERROR HostExecutor$5.call:379 Aborting drone during 
exec echo org.apache.hive.ptest.execution.AbortDroneException: Drone Drone 
[user=someuser, host=somehost, instance=0] exited with 255: SSHResult 
[command=echo, getExitCode()=255, getException()=null, getUser()=someuser, 
getHost()=somehost, getInstance()=0]
at 
org.apache.hive.ptest.execution.HostExecutor$5.call(HostExecutor.java:379)
at 
org.apache.hive.ptest.execution.HostExecutor$5.call(HostExecutor.java:368)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

Tests in error: 
  test(org.apache.hive.ptest.execution.TestReportParser): 
src/test/resources/test-outputs/.svn (Is a directory)

Tests run: 44, Failures: 0, Errors: 1, Skipped: 0
{code}
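The TestReportParser error suggests the parser opens every directory entry, 
including the stray `.svn` directory, as a file. A common guard, sketched here 
with hypothetical paths, is to keep only regular files before reading:

```python
import os
import tempfile

# Lay out a test-outputs directory containing a stray .svn subdirectory.
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, ".svn"))
with open(os.path.join(root, "TestFoo.txt"), "w") as f:
    f.write("ok")

# Opening every entry blindly raises IsADirectoryError on .svn;
# filtering to regular files avoids it.
reports = [name for name in os.listdir(root)
           if os.path.isfile(os.path.join(root, name))]

assert reports == ["TestFoo.txt"]
```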



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-4542) TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected database

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786724#comment-13786724
 ] 

Hudson commented on HIVE-4542:
--

FAILURE: Integrated in Hive-branch-0.12-hadoop2 #6 (See 
[https://builds.apache.org/job/Hive-branch-0.12-hadoop2/6/])
HIVE-4542 : TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected 
database (Vaibhav Gumashta & Thejas Nair via Ashutosh Chauhan) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529259)
* 
/hive/branches/branch-0.12/jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java
* 
/hive/branches/branch-0.12/ql/src/test/org/apache/hadoop/hive/ql/metadata/TestHiveMetaStoreChecker.java


> TestJdbcDriver2.testMetaDataGetSchemas fails because of unexpected database
> ---
>
> Key: HIVE-4542
> URL: https://issues.apache.org/jira/browse/HIVE-4542
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
> Fix For: 0.13.0
>
> Attachments: D13269.1.patch, D13269.2.patch, HIVE-4542.1.patch, 
> HIVE-4542.2.patch, HIVE-4542.3.patch
>
>
> The check for database name in TestJdbcDriver2.testMetaDataGetSchemas fails 
> with the error -
> {code}
> junit.framework.ComparisonFailure: expected:<...efault> but was:<...bname>
> {code}
> i.e., a database called dbname is found, which it does not expect. This failure 
> will happen depending on the order in which the function gets the databases; 
> if the "default" database is the first one, it succeeds.
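The order dependence can be sketched as follows (database names taken from the 
report; the fix direction is illustrative): a positional check on the first 
returned schema is flaky, while a membership check is not.

```python
schemas_run_a = ["default", "dbname"]   # order one run returns databases in
schemas_run_b = ["dbname", "default"]   # another run, different order

# Positional check: passes for one ordering, fails for the other.
assert schemas_run_a[0] == "default"
assert schemas_run_b[0] != "default"    # the flaky failure seen in the test

# Order-independent check passes for both runs.
assert "default" in schemas_run_a and "default" in schemas_run_b
```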



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5433) Fix varchar unit tests to work with hadoop-2.1.1

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786725#comment-13786725
 ] 

Hudson commented on HIVE-5433:
--

FAILURE: Integrated in Hive-branch-0.12-hadoop2 #6 (See 
[https://builds.apache.org/job/Hive-branch-0.12-hadoop2/6/])
HIVE-5433: Fix varchar unit tests to work with hadoop-2.1.1 (Jason Dere via 
Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529339)
* /hive/branches/branch-0.12/ql/src/test/queries/clientpositive/alter_varchar1.q
* /hive/branches/branch-0.12/ql/src/test/queries/clientpositive/varchar_1.q
* 
/hive/branches/branch-0.12/ql/src/test/queries/clientpositive/varchar_nested_types.q
* /hive/branches/branch-0.12/ql/src/test/queries/clientpositive/varchar_udf1.q
* 
/hive/branches/branch-0.12/ql/src/test/results/clientpositive/alter_varchar1.q.out
* /hive/branches/branch-0.12/ql/src/test/results/clientpositive/varchar_1.q.out
* 
/hive/branches/branch-0.12/ql/src/test/results/clientpositive/varchar_nested_types.q.out
* 
/hive/branches/branch-0.12/ql/src/test/results/clientpositive/varchar_udf1.q.out


> Fix varchar unit tests to work with hadoop-2.1.1
> 
>
> Key: HIVE-5433
> URL: https://issues.apache.org/jira/browse/HIVE-5433
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.12.0
>
> Attachments: HIVE-5433.1.patch, HIVE-5433.1.patch
>
>
> A few of the varchar tests fail when testing against hadoop-2.1.1.  It looks 
> like some of the input/output rows used in the tests need to be sorted so 
> that the results look consistent across hadoop versions.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5364) NPE on some queries from partitioned orc table

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786726#comment-13786726
 ] 

Hudson commented on HIVE-5364:
--

FAILURE: Integrated in Hive-branch-0.12-hadoop2 #6 (See 
[https://builds.apache.org/job/Hive-branch-0.12-hadoop2/6/])
HIVE-5364 : NPE on some queries from partitioned orc table (Owen O'Malley via 
Gunther Hagleitner) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529098)
* /hive/branches/branch-0.12/data/files/orc_create_people.txt
* 
/hive/branches/branch-0.12/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java
* 
/hive/branches/branch-0.12/ql/src/java/org/apache/hadoop/hive/ql/io/orc/RecordReaderImpl.java
* /hive/branches/branch-0.12/ql/src/test/queries/clientpositive/orc_create.q
* /hive/branches/branch-0.12/ql/src/test/results/clientpositive/orc_create.q.out


> NPE on some queries from partitioned orc table
> --
>
> Key: HIVE-5364
> URL: https://issues.apache.org/jira/browse/HIVE-5364
> Project: Hive
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: D13215.1.patch
>
>
> If you create a partitioned ORC table with:
> {code}
> create table A
> ...
> PARTITIONED BY (
> year int,
> month int,
> day int)
> {code}
> This query will fail:
> select count(*) from A where year=2013 and month=9 and day=15;



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5449) Hive schematool info option incorrectly reports error for Postgres metastore

2013-10-04 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786717#comment-13786717
 ] 

Prasad Mujumdar commented on HIVE-5449:
---

* Without the patch
{noformat}
$ build/dist/bin/schematool -dbType postgres -info
Metastore connection URL:jdbc:postgresql://localhost/metastore
Metastore Connection Driver :org.postgresql.Driver
Metastore connection User:   hive
Hive distribution version:   0.13.0
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema 
version.
*** schemaTool failed ***
{noformat}

* With the patch  
{noformat}
$ build/dist/bin/schematool -dbType postgres -info
Metastore connection URL:jdbc:postgresql://localhost/metastore
Metastore Connection Driver :org.postgresql.Driver
Metastore connection User:   hive
Hive distribution version:   0.13.0
Metastore schema version:0.13.0
schemaTool completeted
{noformat}



> Hive schematool info option incorrectly reports error for Postgres metastore
> 
>
> Key: HIVE-5449
> URL: https://issues.apache.org/jira/browse/HIVE-5449
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.12.0, 0.13.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5449.1.patch
>
>
> The schema tool has an option to verify the schema version stored in the 
> metastore. This is implemented as a simple select query executed via JDBC. 
> The problem is that Postgres requires object names to be quoted due to the 
> way tables are created. It's similar to issues hit by metastore direct SQL 
> (HIVE-5264, HIVE-5265 etc).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5449) Hive schematool info option incorrectly reports error for Postgres metastore

2013-10-04 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5449:
--

Attachment: HIVE-5449.1.patch

Patch attached

> Hive schematool info option incorrectly reports error for Postgres metastore
> 
>
> Key: HIVE-5449
> URL: https://issues.apache.org/jira/browse/HIVE-5449
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.12.0, 0.13.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5449.1.patch
>
>
> The schema tool has an option to verify the schema version stored in the 
> metastore. This is implemented as a simple select query executed via JDBC. 
> The problem is that Postgres requires object names to be quoted due to the 
> way tables are created. It's similar to issues hit by metastore direct SQL 
> (HIVE-5264, HIVE-5265 etc).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5385) StringUtils is not in commons codec 1.3

2013-10-04 Thread Yin Huai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yin Huai updated HIVE-5385:
---

Status: Patch Available  (was: Open)

> StringUtils is not in commons codec 1.3
> ---
>
> Key: HIVE-5385
> URL: https://issues.apache.org/jira/browse/HIVE-5385
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Kousuke Saruta
>Priority: Trivial
> Attachments: HIVE-5385.1.patch, HIVE-5385.2.patch
>
>
> In ThriftHttpServlet, introduced by HIVE-4763, StringUtils is imported, which 
> was introduced in commons codec 1.4. But our 0.20 shims depend on commons 
> codec 1.3, and our Eclipse classpath template also uses the libs of the 0.20 
> shims, so we will get two errors in Eclipse. 
> Compiling Hive will not have a problem because we load codec 1.4 for the 
> service project (1.4 is also used when "-Dhadoop.version=0.20.2 
> -Dhadoop.mr.rev=20").



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5449) Hive schematool info option incorrectly reports error for Postgres metastore

2013-10-04 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5449:
--

Status: Patch Available  (was: Open)

Review request on https://reviews.apache.org/r/14500/

> Hive schematool info option incorrectly reports error for Postgres metastore
> 
>
> Key: HIVE-5449
> URL: https://issues.apache.org/jira/browse/HIVE-5449
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Affects Versions: 0.12.0, 0.13.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5449.1.patch
>
>
> The schema tool has an option to verify the schema version stored in the 
> metastore. This is implemented as a simple select query executed via JDBC. 
> The problem is that Postgres requires object names to be quoted due to the 
> way tables are created. It's similar to issues hit by metastore direct SQL 
> (HIVE-5264, HIVE-5265 etc).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Review Request 14500: HIVE-5449: Hive schematool info option incorrectly reports error for Postgres metastore

2013-10-04 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14500/
---

Review request for hive and Ashutosh Chauhan.


Bugs: HIVE-5449
https://issues.apache.org/jira/browse/HIVE-5449


Repository: hive-git


Description
---

The schema tool has an option to verify the schema version stored in the 
metastore. This is implemented as a simple select query executed via JDBC. The 
problem is that Postgres requires object names to be quoted due to the way 
tables are created. It's similar to issues hit by metastore direct SQL 
(HIVE-5264, HIVE-5265 etc).
The patch is to use quoted identifiers when required.
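A sketch of the quoting difference (the `VERSION` table and `SCHEMA_VERSION` 
column do exist in the metastore schema, but the helper below is illustrative 
and not the actual HiveSchemaHelper API): Postgres folds unquoted identifiers 
to lower case, so a schema created with quoted upper-case names is only 
reachable with quoted identifiers.

```python
def version_query(db_type):
    """Build the schema-version query, quoting identifiers for Postgres."""
    if db_type == "postgres":
        # Unquoted names would be folded to lower case and miss the table.
        return 'SELECT "SCHEMA_VERSION" FROM "VERSION"'
    return "SELECT SCHEMA_VERSION FROM VERSION"

assert version_query("postgres") == 'SELECT "SCHEMA_VERSION" FROM "VERSION"'
assert version_query("mysql") == "SELECT SCHEMA_VERSION FROM VERSION"
```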


Diffs
-

  beeline/src/java/org/apache/hive/beeline/HiveSchemaHelper.java 23e2fc7 
  beeline/src/java/org/apache/hive/beeline/HiveSchemaTool.java a1f9a6a 

Diff: https://reviews.apache.org/r/14500/diff/


Testing
---

Manually tested with postgres metastore


Thanks,

Prasad Mujumdar



[jira] [Updated] (HIVE-5447) HiveServer2 should allow secure impersonation over LDAP or other non-kerberos connection

2013-10-04 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5447:
--

Status: Patch Available  (was: Open)

Review request on https://reviews.apache.org/r/14498/

> HiveServer2 should allow secure impersonation over LDAP or other non-kerberos 
> connection
> 
>
> Key: HIVE-5447
> URL: https://issues.apache.org/jira/browse/HIVE-5447
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5447.1.patch
>
>
> Currently, impersonation on a secure Hadoop cluster only works when the HS2 
> connection itself uses Kerberos. This forces clients to configure Kerberos, 
> which can be a deployment nightmare.
> We should allow other authentication mechanisms to perform secure 
> impersonation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5447) HiveServer2 should allow secure impersonation over LDAP or other non-kerberos connection

2013-10-04 Thread Prasad Mujumdar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasad Mujumdar updated HIVE-5447:
--

Attachment: HIVE-5447.1.patch

Patch Attached

> HiveServer2 should allow secure impersonation over LDAP or other non-kerberos 
> connection
> 
>
> Key: HIVE-5447
> URL: https://issues.apache.org/jira/browse/HIVE-5447
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 0.11.0, 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5447.1.patch
>
>
> Currently, impersonation on a secure Hadoop cluster only works when the HS2 
> connection itself uses Kerberos. This forces clients to configure Kerberos, 
> which can be a deployment nightmare.
> We should allow other authentication mechanisms to perform secure 
> impersonation.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Review Request 14498: HIVE-5447: HiveServer2 should allow secure impersonation over LDAP or other non-kerberos connection

2013-10-04 Thread Prasad Mujumdar

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14498/
---

Review request for hive.


Bugs: HIVE-5447
https://issues.apache.org/jira/browse/HIVE-5447


Repository: hive-git


Description
---

Invoke session-level secure impersonation even when HS2 authentication is not 
set to kerberos. The session-level authentication already handles secure 
impersonation based on the short name.


Diffs
-

  service/src/java/org/apache/hive/service/auth/PlainSaslHelper.java 15b1675 
  service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java 
857e627 

Diff: https://reviews.apache.org/r/14498/diff/


Testing
---

Manually tested with simple authentication on top of secure cluster.


Thanks,

Prasad Mujumdar



[jira] [Updated] (HIVE-5433) Fix varchar unit tests to work with hadoop-2.1.1

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5433:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk and the 0.12 branch. This will help bring down the 
test failures for 0.12 with hadoop 2!
Thanks for the contribution, Jason! 

> Fix varchar unit tests to work with hadoop-2.1.1
> 
>
> Key: HIVE-5433
> URL: https://issues.apache.org/jira/browse/HIVE-5433
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Attachments: HIVE-5433.1.patch, HIVE-5433.1.patch
>
>
> A few of the varchar tests fail when testing against hadoop-2.1.1.  It looks 
> like some of the input/output rows used in the tests need to be sorted so 
> that the results look consistent across hadoop versions.





[jira] [Updated] (HIVE-5433) Fix varchar unit tests to work with hadoop-2.1.1

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5433:


Fix Version/s: 0.12.0

> Fix varchar unit tests to work with hadoop-2.1.1
> 
>
> Key: HIVE-5433
> URL: https://issues.apache.org/jira/browse/HIVE-5433
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
> Fix For: 0.12.0
>
> Attachments: HIVE-5433.1.patch, HIVE-5433.1.patch
>
>
> A few of the varchar tests fail when testing against hadoop-2.1.1.  It looks 
> like some of the input/output rows used in the tests need to be sorted so 
> that the results look consistent across hadoop versions.





[jira] [Created] (HIVE-5449) Hive schematool info option incorrectly reports error for Postgres metastore

2013-10-04 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-5449:
-

 Summary: Hive schematool info option incorrectly reports error for 
Postgres metastore
 Key: HIVE-5449
 URL: https://issues.apache.org/jira/browse/HIVE-5449
 Project: Hive
  Issue Type: Bug
  Components: Metastore
Affects Versions: 0.12.0, 0.13.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar


The schema tool has an option to verify the schema version stored in the 
metastore. This is implemented as a simple select query executed via JDBC. The 
problem is that Postgres requires object names to be quoted due to the way the 
tables are created. This is similar to the issues hit by metastore direct SQL 
(HIVE-5264, HIVE-5265, etc.).
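A minimal sketch of the quoting issue the description refers to: Postgres folds unquoted identifiers to lower case, so a table created with quoted upper-case names must also be queried with a quoted identifier. `PgQuoteSketch` and `quotePg` are hypothetical helpers, not schematool's actual code.

```java
// Hypothetical sketch: Postgres folds unquoted identifiers to lower case,
// so a table created as "VERSION" (quoted) must be queried quoted as well.
public class PgQuoteSketch {
    public static String quotePg(String identifier) {
        // Double any embedded quotes, then wrap in double quotes.
        return "\"" + identifier.replace("\"", "\"\"") + "\"";
    }
}
```

With this kind of helper, `SELECT ... FROM "VERSION"` works on Postgres where the unquoted `SELECT ... FROM VERSION` would look for a table named `version` and fail.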






[jira] [Commented] (HIVE-5426) TestThriftBinaryCLIService tests fail on branch 0.12

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786688#comment-13786688
 ] 

Hudson commented on HIVE-5426:
--

ABORTED: Integrated in Hive-branch-0.12-hadoop2 #5 (See 
[https://builds.apache.org/job/Hive-branch-0.12-hadoop2/5/])
HIVE-5426: TestThriftBinaryCLIService tests fail on branch 0.12 (Vaibhav 
Gumashta via Thejas Nair) (thejas: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529064)
* 
/hive/branches/branch-0.12/service/src/test/org/apache/hive/service/cli/CLIServiceTest.java
* 
/hive/branches/branch-0.12/service/src/test/org/apache/hive/service/cli/thrift/ThriftCLIServiceTest.java


> TestThriftBinaryCLIService tests fail on branch 0.12
> 
>
> Key: HIVE-5426
> URL: https://issues.apache.org/jira/browse/HIVE-5426
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Vaibhav Gumashta
>Priority: Blocker
> Fix For: 0.12.0
>
> Attachments: HIVE-5426.1.patch
>
>
> Two tests of TestThriftBinaryCLIService are failing in branch 0.12.
> See 
> https://builds.apache.org/job/Hive-branch-0.12-hadoop1/lastCompletedBuild/testReport/





[jira] [Updated] (HIVE-5115) Webhcat e2e tests TestMapReduce_1 and TestHeartbeat_2 require changes for Hadoop 2

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5115:


Resolution: Fixed
Status: Resolved  (was: Patch Available)

Patch committed to trunk. Thanks for the contribution Deepesh!

> Webhcat e2e tests TestMapReduce_1 and TestHeartbeat_2 require changes for 
> Hadoop 2
> --
>
> Key: HIVE-5115
> URL: https://issues.apache.org/jira/browse/HIVE-5115
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Deepesh Khandelwal
>Assignee: Deepesh Khandelwal
> Attachments: HIVE-5115.patch
>
>
> In the webhcat e2e testsuite we have two MR job submission tests
> TestMapReduce_1 (in jobsubmission.conf) runs the hadoop "wordcount" example. 
> Intention of this one is to test MR job submission using WebHCat.
> TestHeartbeat_2 (in jobsubmission2.conf) runs the hadoop "sleep" example. 
> Intention of this one is to test a long running (>10min) WebHCat MR job, see 
> HIVE-4808.
> In Hadoop 1, both of these example MR applications are packaged in 
> hadoop-examples.jar
> In Hadoop 2, "sleep" job is bundled in hadoop-mapreduce-client-jobclient.jar 
> and "wordcount" is bundled in hadoop-mapreduce-examples.jar
> Currently the webhcat tests assume that both these MR applications are in one 
> jar that we copy as hexamples.jar.
> To run these against Hadoop 2 I can think of three simple solutions:
> (1) Stick with one jar and run "sleep" application in the TestMapReduce_1 
> test as well.
> (2) Eliminate the test TestMapReduce_1 as TestHeartbeat_2 runs a MR job as 
> well.
> (3) Require two different jars for Hadoop 2 and call them hclient.jar 
> (containing "sleep" application) and hexamples.jar (containing "wordcount" 
> application). For Hadoop 1, we would make two copies of the same 
> hadoop-examples.jar application and call them hsleep.jar and examples.jar.
> The three approaches mentioned here would require the fewest changes. My 
> inclination is towards (2).
> Let me know what you think and I can provide the patch.





[jira] [Updated] (HIVE-5365) Boolean constants in the query are not handled correctly.

2013-10-04 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5365:
---

Status: Patch Available  (was: Open)

> Boolean constants in the query are not handled correctly.
> -
>
> Key: HIVE-5365
> URL: https://issues.apache.org/jira/browse/HIVE-5365
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5365.1.patch, HIVE-5365.2.patch
>
>
> Boolean constants in the query are not handled correctly.





[jira] [Updated] (HIVE-5365) Boolean constants in the query are not handled correctly.

2013-10-04 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HIVE-5365:
---

Status: Open  (was: Patch Available)

> Boolean constants in the query are not handled correctly.
> -
>
> Key: HIVE-5365
> URL: https://issues.apache.org/jira/browse/HIVE-5365
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jitendra Nath Pandey
>Assignee: Jitendra Nath Pandey
> Attachments: HIVE-5365.1.patch, HIVE-5365.2.patch
>
>
> Boolean constants in the query are not handled correctly.





[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786674#comment-13786674
 ] 

Hive QA commented on HIVE-5402:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606811/HIVE-5402.2.patch

{color:green}SUCCESS:{color} +1 4055 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1034/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1034/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, a bug causes SBAP to incorrectly conclude that it's running from 
> the metastore side when it's actually running from the client side, which 
> causes it to throw an IllegalStateException claiming the warehouse variable 
> isn't set.





[jira] [Updated] (HIVE-5448) webhcat duplicate test TestMapReduce_2 should be removed

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5448:


Status: Patch Available  (was: Open)

> webhcat duplicate test TestMapReduce_2 should be removed
> 
>
> Key: HIVE-5448
> URL: https://issues.apache.org/jira/browse/HIVE-5448
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-5448.1.patch
>
>
> TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
> of TestHeartbeat_2 in jobsubmission2.conf
> NO PRECOMMIT TESTS





[jira] [Updated] (HIVE-5448) webhcat duplicate test TestMapReduce_2 should be removed

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5448:


Attachment: HIVE-5448.1.patch

> webhcat duplicate test TestMapReduce_2 should be removed
> 
>
> Key: HIVE-5448
> URL: https://issues.apache.org/jira/browse/HIVE-5448
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-5448.1.patch
>
>
> TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
> of TestHeartbeat_2 in jobsubmission2.conf





[jira] [Updated] (HIVE-5448) webhcat duplicate test TestMapReduce_2 should be removed

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5448:


Description: 
TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
of TestHeartbeat_2 in jobsubmission2.conf

NO PRECOMMIT TESTS

  was:
TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
of TestHeartbeat_2 in jobsubmission2.conf




> webhcat duplicate test TestMapReduce_2 should be removed
> 
>
> Key: HIVE-5448
> URL: https://issues.apache.org/jira/browse/HIVE-5448
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Attachments: HIVE-5448.1.patch
>
>
> TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
> of TestHeartbeat_2 in jobsubmission2.conf
> NO PRECOMMIT TESTS





[jira] [Created] (HIVE-5448) webhcat test TestMapReduce_2 is duplicate of TestHeartbeat_2

2013-10-04 Thread Thejas M Nair (JIRA)
Thejas M Nair created HIVE-5448:
---

 Summary: webhcat test TestMapReduce_2 is duplicate of 
TestHeartbeat_2
 Key: HIVE-5448
 URL: https://issues.apache.org/jira/browse/HIVE-5448
 Project: Hive
  Issue Type: Bug
  Components: Tests, WebHCat
Affects Versions: 0.12.0
Reporter: Thejas M Nair
Assignee: Thejas M Nair


TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
of TestHeartbeat_2 in jobsubmission2.conf







[jira] [Updated] (HIVE-5448) webhcat duplicate test TestMapReduce_2 should be removed

2013-10-04 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair updated HIVE-5448:


Summary: webhcat duplicate test TestMapReduce_2 should be removed  (was: 
webhcat test TestMapReduce_2 is duplicate of TestHeartbeat_2)

> webhcat duplicate test TestMapReduce_2 should be removed
> 
>
> Key: HIVE-5448
> URL: https://issues.apache.org/jira/browse/HIVE-5448
> Project: Hive
>  Issue Type: Bug
>  Components: Tests, WebHCat
>Affects Versions: 0.12.0
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
>
> TestMapReduce_2 in jobsubmission.conf should be removed, as it is a duplicate 
> of TestHeartbeat_2 in jobsubmission2.conf





[jira] [Commented] (HIVE-5372) Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info repetition

2013-10-04 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786665#comment-13786665
 ] 

Xuefu Zhang commented on HIVE-5372:
---

[~ashutoshc] Thanks for your comments. I agree that this needs to get completed 
in a timely manner, as I am already feeling the rebasing pain. I appreciate 
your quick response.

> Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info 
> repetition
> 
>
> Key: HIVE-5372
> URL: https://issues.apache.org/jira/browse/HIVE-5372
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5372.1.patch, HIVE-5372.patch
>
>
> TypeInfo with its sub-classes and the PrimitiveTypeEntry class seem to carry 
> repetitive information, such as type names and type params. It would be good 
> if we could streamline how this information is organized.





[jira] [Updated] (HIVE-5372) Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info repetition

2013-10-04 Thread Xuefu Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xuefu Zhang updated HIVE-5372:
--

Attachment: HIVE-5372.1.patch

Patch updated, as the old one contained an import reference to a deleted class.

> Refactor TypeInfo and PrimitiveTypeEntry class hierachy to eliminate info 
> repetition
> 
>
> Key: HIVE-5372
> URL: https://issues.apache.org/jira/browse/HIVE-5372
> Project: Hive
>  Issue Type: Improvement
>  Components: Types
>Affects Versions: 0.12.0
>Reporter: Xuefu Zhang
>Assignee: Xuefu Zhang
> Fix For: 0.13.0
>
> Attachments: HIVE-5372.1.patch, HIVE-5372.patch
>
>
> TypeInfo with its sub-classes and the PrimitiveTypeEntry class seem to carry 
> repetitive information, such as type names and type params. It would be good 
> if we could streamline how this information is organized.





[jira] [Commented] (HIVE-5334) Milestone 3: Some tests pass under maven

2013-10-04 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786651#comment-13786651
 ] 

Brock Noland commented on HIVE-5334:


I plan on committing this Monday unless I hear otherwise, as I am rapidly 
making progress on finishing off the TestCliDriver tests. Thanks!

> Milestone 3: Some tests pass under maven
> 
>
> Key: HIVE-5334
> URL: https://issues.apache.org/jira/browse/HIVE-5334
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5334.patch, HIVE-5334.patch
>
>
> This milestone is that some tests pass and therefore we have the basic unit 
> test environment setup. We'll hunt down the rest of the failing tests in 
> future jiras.
> NO PRECOMMIT TESTS





[jira] [Commented] (HIVE-5443) Few hadoop2 .q.out needs to be updated

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786647#comment-13786647
 ] 

Hudson commented on HIVE-5443:
--

FAILURE: Integrated in Hive-trunk-hadoop2 #478 (See 
[https://builds.apache.org/job/Hive-trunk-hadoop2/478/])
HIVE-5443 - Few hadoop2 .q.out needs to be updated (Ashutosh Chauhan via Brock 
Noland) (brock: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1529275)
* /hive/trunk/ql/src/test/results/clientpositive/combine2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/ctas.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/input39.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_11.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_12.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_13.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_2.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_5.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_6.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_7.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_8.q.out
* /hive/trunk/ql/src/test/results/clientpositive/list_bucket_dml_9.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/list_bucket_query_multiskew_1.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/list_bucket_query_multiskew_2.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/list_bucket_query_multiskew_3.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/list_bucket_query_oneskew_1.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/list_bucket_query_oneskew_2.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/list_bucket_query_oneskew_3.q.out
* /hive/trunk/ql/src/test/results/clientpositive/skewjoin_union_remove_1.q.out
* /hive/trunk/ql/src/test/results/clientpositive/skewjoin_union_remove_2.q.out
* 
/hive/trunk/ql/src/test/results/clientpositive/truncate_column_list_bucket.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_10.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_23.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_4.q.out
* /hive/trunk/ql/src/test/results/clientpositive/union_remove_5.q.out


> Few hadoop2 .q.out needs to be updated
> --
>
> Key: HIVE-5443
> URL: https://issues.apache.org/jira/browse/HIVE-5443
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.13.0
>
> Attachments: HIVE-5443.2.patch, HIVE-5443.patch
>
>
> These hadoop2-only tests were not updated in HIVE-5223





[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786640#comment-13786640
 ] 

Ashutosh Chauhan commented on HIVE-5402:


* Your explanation of that variable confuses me further. It seems like this 
code can be refactored so it doesn't need to know such fine details about 
which class it was called from; as it stands, the code seems brittle. We 
should explore this more in another jira, otherwise the plethora of these 
booleans makes the code very hard to understand, since it results in very 
tight coupling of state across different classes.

* You got me :) Here also getConf() should return HiveConf; if that touches 
too many files, it's OK to do a cast for now, but in another jira we need to 
refactor this too.

> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, a bug causes SBAP to incorrectly conclude that it's running from 
> the metastore side when it's actually running from the client side, which 
> causes it to throw an IllegalStateException claiming the warehouse variable 
> isn't set.





[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Sushanth Sowmyan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786607#comment-13786607
 ] 

Sushanth Sowmyan commented on HIVE-5402:


> Rename isRunFromMetaStore to serverMode

Hmm.. On one hand, that makes sense, since client-side running also launches a 
metastore, just that it is a local metastore. serverMode makes that clear. On 
the other hand, there's nothing that stops this from being run from that local 
metastore. The distinction here is whether it is being called from Hive or 
HiveMetaStore, and as such, isRunFromMetastore is more accurate here. That 
said, I'm not going to be a stickler about this, so I'm willing to change it 
if you still think it should change.

> Do Hive.get(getConf()) instead of Hive.get(new HiveConf()), because it's an 
> expensive object to initialize and we don't want multiple copies of the 
> configuration to be active in a process.

Hahaha, I expected this, and was specifically thinking of you when I wrote 
that, and hunted around in advance! :D Unfortunately, Hive.get does not accept 
a Configuration, which is what getConf() returns. It needs a HiveConf. If you 
prefer, I can do an instanceof check to see if the Configuration I have is 
actually a HiveConf, and if so, cast and pass it directly.
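A rough sketch of the instanceof check proposed above; `Configuration` and `HiveConf` here are minimal stand-in classes for the real Hadoop and Hive types, so this shows only the pattern, not the actual patch.

```java
// Sketch of the proposed pattern: reuse the existing conf when it is
// already a HiveConf, otherwise fall back to constructing a new one.
// Configuration and HiveConf are hypothetical stand-ins for the real classes.
public class ConfCastSketch {
    static class Configuration {}
    static class HiveConf extends Configuration {}

    public static HiveConf toHiveConf(Configuration conf) {
        if (conf instanceof HiveConf) {
            return (HiveConf) conf;   // reuse: avoids an expensive re-init
        }
        return new HiveConf();        // fallback: pay the construction cost
    }
}
```

The design trade-off discussed in the thread is exactly this: the cast path avoids building a second expensive configuration object, while the fallback keeps the method safe for callers that only hold a plain Configuration.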

> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, a bug causes SBAP to incorrectly conclude that it's running from 
> the metastore side when it's actually running from the client side, which 
> causes it to throw an IllegalStateException claiming the warehouse variable 
> isn't set.





[jira] [Updated] (HIVE-5385) StringUtils is not in commons codec 1.3

2013-10-04 Thread Kousuke Saruta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kousuke Saruta updated HIVE-5385:
-

Attachment: HIVE-5385.2.patch

I've attached a 2nd patch for this issue.

> StringUtils is not in commons codec 1.3
> ---
>
> Key: HIVE-5385
> URL: https://issues.apache.org/jira/browse/HIVE-5385
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Kousuke Saruta
>Priority: Trivial
> Attachments: HIVE-5385.1.patch, HIVE-5385.2.patch
>
>
> ThriftHttpServlet, introduced by HIVE-4763, imports StringUtils, which was 
> introduced in commons codec 1.4. But our 0.20 shims depend on commons codec 
> 1.3, and our eclipse classpath template also uses the 0.20 shims' libs, so 
> we get two errors in eclipse. 
> Compiling hive is not a problem because we load codec 1.4 for the service 
> project (1.4 is also used when "-Dhadoop.version=0.20.2 
> -Dhadoop.mr.rev=20").





[jira] [Commented] (HIVE-5402) StorageBasedAuthorizationProvider is not correctly able to determine that it is running from client-side

2013-10-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786571#comment-13786571
 ] 

Ashutosh Chauhan commented on HIVE-5402:


* Rename isRunFromMetaStore to serverMode
* Do Hive.get(getConf()) instead of Hive.get(new HiveConf()), because it's an 
expensive object to initialize and we don't want multiple copies of the 
configuration to be active in a process.
* Thanks for tests!


> StorageBasedAuthorizationProvider is not correctly able to determine that it 
> is running from client-side
> 
>
> Key: HIVE-5402
> URL: https://issues.apache.org/jira/browse/HIVE-5402
> Project: Hive
>  Issue Type: Bug
>  Components: Authorization
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Attachments: HIVE-5402.2.patch, HIVE-5402.patch
>
>
> HIVE-5048 tried to change the StorageBasedAuthorizationProvider (SBAP) so 
> that it could be run from the client side as well.
> However, a bug causes SBAP to incorrectly conclude that it's running from 
> the metastore side when it's actually running from the client side, which 
> causes it to throw an IllegalStateException claiming the warehouse variable 
> isn't set.





[jira] [Updated] (HIVE-5443) Few hadoop2 .q.out needs to be updated

2013-10-04 Thread Brock Noland (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brock Noland updated HIVE-5443:
---

Resolution: Fixed
Fix Version/s: 0.13.0
Status: Resolved  (was: Patch Available)

Committed to trunk! Thank you for the contribution Ashutosh!

> Few hadoop2 .q.out needs to be updated
> --
>
> Key: HIVE-5443
> URL: https://issues.apache.org/jira/browse/HIVE-5443
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Fix For: 0.13.0
>
> Attachments: HIVE-5443.2.patch, HIVE-5443.patch
>
>
> These hadoop2-only tests were not updated in HIVE-5223





[jira] [Commented] (HIVE-5443) Few hadoop2 .q.out needs to be updated

2013-10-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786538#comment-13786538
 ] 

Hive QA commented on HIVE-5443:
---



{color:green}Overall{color}: +1 all checks pass

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12606804/HIVE-5443.2.patch

{color:green}SUCCESS:{color} +1 4052 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1033/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1033/console

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}


> Few hadoop2 .q.out needs to be updated
> --
>
> Key: HIVE-5443
> URL: https://issues.apache.org/jira/browse/HIVE-5443
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-5443.2.patch, HIVE-5443.patch
>
>
> These hadoop2-only tests were not updated in HIVE-5223





[jira] [Commented] (HIVE-5155) Support secure proxy user access to HiveServer2

2013-10-04 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786514#comment-13786514
 ] 

Thejas M Nair commented on HIVE-5155:
-

Sorry about the delay in looking at the patch. I was hoping to get an RC out 
for 0.12 this weekend, and was planning to add only blocker bug fixes to the 
branch until then. I will take a look at the patch tonight and see if I can 
get it into 0.12.


> Support secure proxy user access to HiveServer2
> ---
>
> Key: HIVE-5155
> URL: https://issues.apache.org/jira/browse/HIVE-5155
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication, HiveServer2, JDBC
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5155-1-nothrift.patch, HIVE-5155.1.patch, 
> HIVE-5155.2.patch, HIVE-5155.3.patch, HIVE-5155-noThrift.2.patch, 
> HIVE-5155-noThrift.4.patch, ProxyAuth.jar, ProxyAuth.java, ProxyAuth.results
>
>
> HiveServer2 can authenticate a client via Kerberos and impersonate the 
> connecting user with the underlying secure hadoop. This makes it a gateway 
> for a remote client to access a secure hadoop cluster. This works fine when 
> the client obtains a Kerberos ticket and connects directly to HiveServer2. 
> There's another big use case for middleware tools where the end user wants 
> to access Hive via another server: for example, an Oozie action or Hue 
> submitting queries, or a BI tool server accessing HiveServer2. In these 
> cases, the third-party server doesn't have the end user's Kerberos 
> credentials and hence can't submit queries to HiveServer2 on behalf of the 
> end user.
> This ticket is for enabling proxy access to HiveServer2 for third party tools 
> on behalf of end users. There are two parts of the solution proposed in this 
> ticket:
> 1) Delegation token based connection for Oozie (OOZIE-1457)
> This is the common mechanism for Hadoop ecosystem components; Hive Remote 
> Metastore and HCatalog already support it. This is suitable for a tool like 
> Oozie that submits MR jobs as actions on behalf of its client. Oozie 
> already uses a similar mechanism for Metastore/HCatalog access.
> 2) Direct proxy access for privileged hadoop users
> The delegation token implementation can be a challenge for non-hadoop 
> (especially non-java) components. This second part enables a privileged user 
> to directly specify an alternate session user during the connection. If the 
> connecting user has hadoop level privilege to impersonate the requested 
> userid, then HiveServer2 will run the session as that requested user. For 
> example, user Hue is allowed to impersonate user Bob (via core-site.xml proxy 
> user configuration). Then user Hue can connect to HiveServer2 and specify Bob 
> as session user via a session property. HiveServer2 will verify Hue's proxy 
> user privilege and then impersonate user Bob instead of Hue. This will enable 
> any third party tool to impersonate alternate userid without having to 
> implement delegation token connection.
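As an illustration of part (2), a middleware server could request an alternate session user through the JDBC connection URL. The sketch below only builds the URL; the host, realm, and the `hive.server2.proxy.user` property name are assumptions for illustration, so check them against the final patch:

```java
/** Sketch of a direct-proxy JDBC URL: the caller authenticates as itself
 *  (e.g. "hue" with its own Kerberos ticket) but asks HiveServer2 to run the
 *  session as sessionUser. HiveServer2 is expected to check the Hadoop
 *  proxy-user configuration before impersonating. Host, realm, and property
 *  name here are illustrative assumptions. */
public class ProxyConnectSketch {
    static String proxyUrl(String hostPort, String principal, String sessionUser) {
        return "jdbc:hive2://" + hostPort + "/default"
             + ";principal=" + principal
             + ";hive.server2.proxy.user=" + sessionUser;
    }
}
```

A middleware server would then pass the resulting URL to `DriverManager.getConnection` as usual; no end-user credentials are needed on the client side.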



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5155) Support secure proxy user access to HiveServer2

2013-10-04 Thread Prasad Mujumdar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786501#comment-13786501
 ] 

Prasad Mujumdar commented on HIVE-5155:
---

[~brocknoland] & [~thejas] Can we please consider this for 0.12? It's also 
blocking Oozie. Thanks!

> Support secure proxy user access to HiveServer2
> ---
>
> Key: HIVE-5155
> URL: https://issues.apache.org/jira/browse/HIVE-5155
> Project: Hive
>  Issue Type: Improvement
>  Components: Authentication, HiveServer2, JDBC
>Affects Versions: 0.12.0
>Reporter: Prasad Mujumdar
>Assignee: Prasad Mujumdar
> Attachments: HIVE-5155-1-nothrift.patch, HIVE-5155.1.patch, 
> HIVE-5155.2.patch, HIVE-5155.3.patch, HIVE-5155-noThrift.2.patch, 
> HIVE-5155-noThrift.4.patch, ProxyAuth.jar, ProxyAuth.java, ProxyAuth.results



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-4888) listPartitionsByFilter doesn't support lt/gt/lte/gte

2013-10-04 Thread Phabricator (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phabricator updated HIVE-4888:
--

Attachment: D13101.6.patch

sershe updated the revision "HIVE-4888 [jira] listPartitionsByFilter doesn't 
support lt/gt/lte/gte".

  wrong number in test

Reviewers: JIRA

REVISION DETAIL
  https://reviews.facebook.net/D13101

CHANGE SINCE LAST DIFF
  https://reviews.facebook.net/D13101?vs=40959&id=40995#toc

AFFECTED FILES
  metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
  metastore/src/java/org/apache/hadoop/hive/metastore/parser/ExpressionTree.java
  metastore/src/test/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java
  ql/src/test/org/apache/hadoop/hive/metastore/TestMetastoreExpr.java
  ql/src/test/queries/clientpositive/filter_numeric.q
  ql/src/test/results/clientpositive/filter_numeric.q.out
  serde/if/serde.thrift
  serde/src/gen/thrift/gen-cpp/serde_constants.cpp
  serde/src/gen/thrift/gen-cpp/serde_constants.h
  
serde/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/serde/serdeConstants.java
  serde/src/gen/thrift/gen-php/org/apache/hadoop/hive/serde/Types.php
  serde/src/gen/thrift/gen-py/org_apache_hadoop_hive_serde/constants.py
  serde/src/gen/thrift/gen-rb/serde_constants.rb

To: JIRA, sershe


> listPartitionsByFilter doesn't support lt/gt/lte/gte
> 
>
> Key: HIVE-4888
> URL: https://issues.apache.org/jira/browse/HIVE-4888
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: D13101.1.patch, D13101.2.patch, D13101.3.patch, 
> D13101.4.patch, D13101.5.patch, D13101.6.patch, HIVE-4888.00.patch, 
> HIVE-4888.01.patch, HIVE-4888.04.patch, HIVE-4888.05.patch, 
> HIVE-4888.06.patch, HIVE-4888.on-top-of-4914.patch
>
>
> Filter pushdown could be improved. Based on my experiments, there's no 
> reasonable way to do it with DN 2.0, due to a DN bug in substring and 
> Collection.get(int) not being implemented.
> With a version as low as 2.1 we can use values.get on the partition to 
> extract values to compare to. Type compatibility is an issue, but it is easy 
> for strings and integral values.
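For context, the newly supported comparison operators map one-to-one onto SQL operators when the filter is pushed down to direct SQL. A simplified sketch of that dispatch (hypothetical names; the real implementation lives in ExpressionTree.java and MetaStoreDirectSql.java):

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch: translate metastore filter comparison operators (including the
 *  newly supported lt/gt/lte/gte) into SQL operators for pushdown.
 *  Illustrative only; see ExpressionTree.java for the real logic. */
public class FilterOpSketch {
    private static final Map<String, String> OPS = new HashMap<>();
    static {
        OPS.put("=", "=");
        OPS.put("!=", "<>");
        OPS.put("<", "<");    // lt
        OPS.put(">", ">");    // gt
        OPS.put("<=", "<=");  // lte
        OPS.put(">=", ">=");  // gte
    }

    public static String toSql(String col, String op, String literal) {
        String sqlOp = OPS.get(op);
        if (sqlOp == null) {
            throw new IllegalArgumentException("unsupported operator: " + op);
        }
        return col + " " + sqlOp + " " + literal;
    }
}
```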



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5428) Direct SQL check fails during tests

2013-10-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786493#comment-13786493
 ] 

Sergey Shelukhin commented on HIVE-5428:


I ran the tests on my CentOS machine... all passed except a few tests in Thrift 
and load_hdfs_file_with_space_in_the_name on MiniMR, which are probably 
unrelated and passed on reruns.

> Direct SQL check fails during tests
> ---
>
> Key: HIVE-5428
> URL: https://issues.apache.org/jira/browse/HIVE-5428
> Project: Hive
>  Issue Type: Bug
>Reporter: Brock Noland
>Assignee: Sergey Shelukhin
> Attachments: D13245.1.patch, D13245.2.patch, HIVE-5428.01.patch
>
>
> Noticed this while working on mavenization. If you run the following command
> {noformat}
> ant test -Dtestcase=TestCliDriver -Dqfile=udf_case.q -Dtest.silent=false
> {noformat}
> and look at the top of the logs you see the exception below. It looks like 
> something needs to be changed in the initialization order.
> {noformat}
> 2013-10-02 13:42:21,596 INFO  metastore.ObjectStore 
> (ObjectStore.java:initialize(243)) - ObjectStore, initialize called
> 2013-10-02 13:42:22,048 DEBUG bonecp.BoneCPDataSource 
> (BoneCPDataSource.java:maybeInit(148)) - JDBC URL = 
> jdbc:derby:;databaseName=../build/test/junit_metastore_db;create=true, 
> Username = APP, partitions = 1, max (per partition) = 0, min (per partition) 
> = 0, helper threads = 3, idle max age = 60 min, idle test period = 240 min
> 2013-10-02 13:42:22,051 WARN  bonecp.BoneCPConfig 
> (BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
> 2013-10-02 13:42:30,218 INFO  metastore.ObjectStore 
> (ObjectStore.java:getPMF(312)) - Setting MetaStore object pin classes with 
> hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
> 2013-10-02 13:42:30,253 DEBUG bonecp.BoneCPDataSource 
> (BoneCPDataSource.java:maybeInit(148)) - JDBC URL = 
> jdbc:derby:;databaseName=../build/test/junit_metastore_db;create=true, 
> Username = APP, partitions = 1, max (per partition) = 0, min (per partition) 
> = 0, helper threads = 3, idle max age = 60 min, idle test period = 240 min
> 2013-10-02 13:42:30,253 WARN  bonecp.BoneCPConfig 
> (BoneCPConfig.java:sanitize(1537)) - Max Connections < 1. Setting to 20
> 2013-10-02 13:42:30,262 INFO  metastore.MetaStoreDirectSql 
> (MetaStoreDirectSql.java:<init>(99)) - MySQL check failed, assuming we are 
> not on mysql: Lexical error at line 1, column 5.  Encountered: "@" (64), 
> after : "".
> 2013-10-02 13:42:30,298 ERROR metastore.MetaStoreDirectSql 
> (MetaStoreDirectSql.java:<init>(112)) - Self-test query [select "DB_ID" from 
> "DBS"] failed; direct SQL is disabled
> javax.jdo.JDODataStoreException: Error executing SQL query "select "DB_ID" 
> from "DBS"".
>   at 
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:451)
>   at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:230)
>   at 
> org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:108)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:249)
>   at 
> org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:220)
>   at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:62)
>   at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:117)
>   at 
> org.apache.hadoop.hive.metastore.RetryingRawStore.<init>(RetryingRawStore.java:62)
>   at 
> org.apache.hadoop.hive.metastore.RetryingRawStore.getProxy(RetryingRawStore.java:71)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:418)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:405)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:444)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:329)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:289)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:54)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:59)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore.newHMSHandler(HiveMetaStore.java:4084)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:126)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:52
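The pattern visible in this log, running a cheap self-test query at construction time and disabling direct SQL when it throws, can be sketched generically (a simplified illustration; the real logic lives in MetaStoreDirectSql's constructor):

```java
import java.util.concurrent.Callable;

/** Sketch of the self-test pattern: probe the datastore with a trivial query
 *  (e.g. select "DB_ID" from "DBS") and fall back to the slower-but-portable
 *  JDO path if the probe throws. Illustrative only. */
public class DirectSqlProbe {
    private final boolean directSqlEnabled;

    public DirectSqlProbe(Callable<?> selfTestQuery) {
        boolean ok;
        try {
            selfTestQuery.call();  // the probe query
            ok = true;
        } catch (Exception e) {
            // In the real code this is logged; direct SQL is then disabled.
            ok = false;
        }
        directSqlEnabled = ok;
    }

    public boolean isDirectSqlEnabled() { return directSqlEnabled; }
}
```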

[jira] [Updated] (HIVE-4888) listPartitionsByFilter doesn't support lt/gt/lte/gte

2013-10-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-4888:
---

Attachment: HIVE-4888.06.patch

Fixed a typo when changing the test... the test now passes for me.

> listPartitionsByFilter doesn't support lt/gt/lte/gte
> 
>
> Key: HIVE-4888
> URL: https://issues.apache.org/jira/browse/HIVE-4888
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: D13101.1.patch, D13101.2.patch, D13101.3.patch, 
> D13101.4.patch, D13101.5.patch, HIVE-4888.00.patch, HIVE-4888.01.patch, 
> HIVE-4888.04.patch, HIVE-4888.05.patch, HIVE-4888.06.patch, 
> HIVE-4888.on-top-of-4914.patch
>
>
> Filter pushdown could be improved. Based on my experiments, there's no 
> reasonable way to do it with DN 2.0, due to a DN bug in substring and 
> Collection.get(int) not being implemented.
> With a version as low as 2.1 we can use values.get on the partition to 
> extract values to compare to. Type compatibility is an issue, but it is easy 
> for strings and integral values.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5445) PTest2 should use testonly target

2013-10-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786492#comment-13786492
 ] 

Ashutosh Chauhan commented on HIVE-5445:


+1

> PTest2 should use testonly target
> -
>
> Key: HIVE-5445
> URL: https://issues.apache.org/jira/browse/HIVE-5445
> Project: Hive
>  Issue Type: Improvement
>Reporter: Brock Noland
>Assignee: Brock Noland
> Attachments: HIVE-5445.patch
>
>
> NO PRECOMMIT TESTS



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HIVE-5447) HiveServer2 should allow secure impersonation over LDAP or other non-kerberos connection

2013-10-04 Thread Prasad Mujumdar (JIRA)
Prasad Mujumdar created HIVE-5447:
-

 Summary: HiveServer2 should allow secure impersonation over LDAP 
or other non-kerberos connection
 Key: HIVE-5447
 URL: https://issues.apache.org/jira/browse/HIVE-5447
 Project: Hive
  Issue Type: Improvement
  Components: HiveServer2
Affects Versions: 0.11.0, 0.12.0
Reporter: Prasad Mujumdar
Assignee: Prasad Mujumdar


Currently, impersonation on a secure Hadoop cluster works only when the HS2 
connection itself uses Kerberos. This forces clients to configure Kerberos, 
which can be a deployment nightmare.
We should allow other authentication mechanisms to perform secure 
impersonation.
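The check being requested is independent of the authentication scheme: however the connecting user proved its identity (Kerberos, LDAP, ...), HiveServer2 only has to ask whether that user may impersonate the requested one. A toy model of that authorization step (an illustration only, not Hadoop's actual ProxyUsers code):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Toy model of Hadoop's proxy-user check (hadoop.proxyuser.<user>.* in
 *  core-site.xml): a privileged user may impersonate only the users it is
 *  configured for, regardless of how it authenticated. Illustrative only. */
public class ProxyAuthModel {
    private final Map<String, Set<String>> allowed = new HashMap<>();

    public void allow(String proxy, String... targets) {
        allowed.computeIfAbsent(proxy, k -> new HashSet<>())
               .addAll(Arrays.asList(targets));
    }

    /** True iff 'proxy' may run sessions as 'target'. */
    public boolean canImpersonate(String proxy, String target) {
        return allowed.getOrDefault(proxy, Collections.emptySet()).contains(target);
    }
}
```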



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Review Request 14326: HIVE-4629. HS2 should support an API to retrieve query logs

2013-10-04 Thread Brock Noland

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/14326/#review26683
---


Shreepadma,

This looks like a very useful feature for clients of HS2! Indeed I wish all 
databases had this feature. The patch looks very good. I have noted a few items 
below, nothing major, basically a bunch of nits.

Brock


jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


Let's add a test where we call getLog without any query.



jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


Why catch this exception at all if we rethrow it immediately?



jdbc/src/test/org/apache/hive/jdbc/TestJdbcDriver2.java


I think we should append the log to these two error messages.



service/if/TCLIService.thrift


Let's trim this trailing ws. (understand it's not yours.)



service/src/java/org/apache/hive/service/cli/CLIService.java


Should this be called in a finally block? Same question for the items below.



service/src/java/org/apache/hive/service/cli/CLIService.java


Is this useful at the INFO level? It's completely your call but I was just 
curious.



service/src/java/org/apache/hive/service/cli/log/LinkedStringBuffer.java


Looks like we only require the List interface on the LHS?



service/src/java/org/apache/hive/service/cli/log/LinkedStringBuffer.java


Internally we store this as an int. Should the return type be an int, or 
should the internal type be a long?
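The LinkedStringBuffer under review apparently caps how much log data is kept per operation. A minimal, self-contained sketch of such a bounded buffer (assumed semantics: drop the oldest entries once the character capacity is exceeded; names are illustrative, not Hive's actual class):

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of a bounded, append-only log buffer: keeps at most `capacity`
 *  characters, evicting the oldest entries first. Illustrative only. */
public class BoundedLogBuffer {
    private final Deque<String> entries = new ArrayDeque<>();
    private final int capacity;  // in characters, stored as int (see review note)
    private int size;

    public BoundedLogBuffer(int capacity) { this.capacity = capacity; }

    public synchronized void append(String line) {
        entries.addLast(line);
        size += line.length();
        while (size > capacity && !entries.isEmpty()) {
            size -= entries.removeFirst().length();  // drop oldest entry
        }
    }

    public synchronized int length() { return size; }

    public synchronized String read() {
        StringBuilder sb = new StringBuilder(size);
        for (String s : entries) sb.append(s);
        return sb.toString();
    }
}
```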



service/src/java/org/apache/hive/service/cli/log/LogDivertAppender.java


When we drop an event, that means it's not stored in memory. Is it still 
logged to the HS2 log file?

Since we have a limit on the amount of data we store per log capture, I 
guess the question is, is all this data also sent to the HS2 log file?



service/src/java/org/apache/hive/service/cli/log/LogManager.java


The order of members for this class should be:

static final
final
non-final



service/src/java/org/apache/hive/service/cli/log/LogManager.java


These two should start with a lowercase char



service/src/java/org/apache/hive/service/cli/log/LogManager.java


LOG should be final



service/src/java/org/apache/hive/service/cli/log/LogManager.java


The variable should start with a lowercase char. Same below and above.



service/src/java/org/apache/hive/service/cli/log/LogManager.java


spelling



service/src/java/org/apache/hive/service/cli/log/LogManager.java


Slightly confusing. I would expect:

if (operationLog == null) {
  if (createIfAbsent) {
    doCreate();
  } else {
    throw ...;
  }
}



service/src/java/org/apache/hive/service/cli/log/LogManager.java


Operation is spelled wrong



service/src/java/org/apache/hive/service/cli/session/HiveSession.java


trailing ws



service/src/java/org/apache/hive/service/cli/session/HiveSessionImpl.java


Similar to above, should this be part of the finally?



service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIService.java


I know we are printing stack traces to stderr elsewhere in this file, but 
let's use the logger?



service/src/java/org/apache/hive/service/cli/thrift/ThriftCLIServiceClient.java


What encoding should this be, UTF-8?


- Brock Noland


On Sept. 25, 2013, 12:08 a.m., Shreepadma Venugopalan wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/14326/
> ---
> 
> (Updated Sept. 25, 2013, 12:08 a.m.)
> 
> 
> Review request for hive and Brock Noland.
> 
> 
> Bugs: HIVE-4629
> https://issues.apache.org/jira/browse/HIVE-4629
> 
> 
> Repository: hive-git
> 
> 
> Description
> ---
> 
> Adds a new API to HS2, String getLog(OperationHandle opHandle) that returns 
> the query log for a 

[jira] [Updated] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5391:
---

Status: Open  (was: Patch Available)

> make ORC predicate pushdown work with vectorization
> ---
>
> Key: HIVE-5391
> URL: https://issues.apache.org/jira/browse/HIVE-5391
> Project: Hive
>  Issue Type: Improvement
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-5391.01.patch, HIVE-5391.01-vectorization.patch, 
> HIVE-5391.02.patch, HIVE-5391.03.patch, HIVE-5391.patch, 
> HIVE-5391-vectorization.patch
>
>
> Vectorized execution doesn't utilize ORC predicate pushdown. It should.
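Once implemented, exercising both features together would presumably come down to two settings; a hedged sketch of a session that would hit both code paths (the property names below are the usual Hive ones, but verify them against your version):

```sql
-- Enable vectorized execution and ORC predicate pushdown together.
SET hive.vectorized.execution.enabled=true;
SET hive.optimize.index.filter=true;  -- pushes SARGable predicates into ORC
SELECT col FROM orc_table WHERE col > 10;
```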



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5391:
---

Status: Patch Available  (was: Open)




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HIVE-5391) make ORC predicate pushdown work with vectorization

2013-10-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-5391:
---

Attachment: HIVE-5391.03.patch

Rebased the patch; there were conflicts.




--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5385) StringUtils is not in commons codec 1.3

2013-10-04 Thread Kousuke Saruta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786444#comment-13786444
 ] 

Kousuke Saruta commented on HIVE-5385:
--

[~yhuai] I think we should still keep 0.20.2, because there are some 
compatibility problems between 0.20.2 and 0.20.205+ (e.g. 
UnixUserGroupInformation is no longer used in 0.20.205).
OK, I'll try to remove the dependency on 1.3.

> StringUtils is not in commons codec 1.3
> ---
>
> Key: HIVE-5385
> URL: https://issues.apache.org/jira/browse/HIVE-5385
> Project: Hive
>  Issue Type: Bug
>Reporter: Yin Huai
>Assignee: Kousuke Saruta
>Priority: Trivial
> Attachments: HIVE-5385.1.patch
>
>
> In ThriftHttpServlet, introduced by HIVE-4763, StringUtils is imported; that 
> class was added in Commons Codec 1.4, but our 0.20 shims depend on Commons 
> Codec 1.3, and our Eclipse classpath template also uses the 0.20 shims' 
> libs, so we get two errors in Eclipse.
> Compiling Hive is not a problem because we load Codec 1.4 for the service 
> project (1.4 is also used when "-Dhadoop.version=0.20.2 
> -Dhadoop.mr.rev=20").
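For reference, the Codec 1.4 convenience in question can be reproduced with plain JDK calls if one wants to stay compatible with Codec 1.3-era dependencies; this is a sketch of the workaround, not necessarily what the patch does:

```java
import java.io.UnsupportedEncodingException;

/** Sketch: a JDK-only stand-in for
 *  org.apache.commons.codec.binary.StringUtils.getBytesUtf8 (added in Codec
 *  1.4), so code keeps compiling against the Codec 1.3 pulled in by the 0.20
 *  shims. Illustrative only. */
public class Utf8Bytes {
    public static byte[] getBytesUtf8(String s) {
        if (s == null) return null;
        try {
            return s.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is a mandatory charset on every JVM, so this can't happen.
            throw new IllegalStateException(e);
        }
    }
}
```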



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HIVE-5443) Few hadoop2 .q.out needs to be updated

2013-10-04 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786435#comment-13786435
 ] 

Brock Noland commented on HIVE-5443:


+1

> Few hadoop2 .q.out needs to be updated
> --
>
> Key: HIVE-5443
> URL: https://issues.apache.org/jira/browse/HIVE-5443
> Project: Hive
>  Issue Type: Task
>Affects Versions: 0.13.0
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-5443.2.patch, HIVE-5443.patch
>
>
> These hadoop2-only tests were not updated in HIVE-5223



--
This message was sent by Atlassian JIRA
(v6.1#6144)

