[jira] [Commented] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597244#comment-15597244
 ] 

Hive QA commented on HIVE-14909:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834790/HIVE-14909.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_view_rename] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_8] 
(batchId=9)
org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_bulk] 
(batchId=89)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[recursive_view] 
(batchId=83)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
org.apache.hive.hcatalog.cli.TestSemanticAnalysis.testAlterTableRename 
(batchId=172)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1750/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1750/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1750/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 10 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834790 - PreCommit-HIVE-Build

> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
> --
>
> Key: HIVE-14909
> URL: https://issues.apache.org/jira/browse/HIVE-14909
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Adriano
>Assignee: Chaoyu Tang
> Attachments: HIVE-14909.patch, HIVE-14909.patch
>
>
> Alter Table operation for db_enc.rename_test failed to move data due to: 
> '/hdfs/encrypted_path/db_enc/rename_test can't be moved from an encryption 
> zone.'
> When Hive renames a managed table, it always creates the new renamed table 
> directory under its database directory in order to keep a db/table 
> hierarchy. In this case, the renamed table directory is created under the 
> "default" db directory, typically set to /hive/warehouse/.
> This error doesn't appear if you first create a database that points to a 
> directory outside /hive/warehouse/, say '/hdfs/encrypted_path'. For example:
> create database db_enc location '/hdfs/encrypted_path/db_enc'; 
> use db_enc; 
> create table rename_test (...) location 
> '/hdfs/encrypted_path/db_enc/rename_test'; 
> alter table rename_test rename to test_rename; 
> The renamed test_rename directory is created under 
> /hdfs/encrypted_path/db_enc. 
> Considering that filesystem encryption is often part of the gradual 
> hardening of a system (where the system and its data may already exist), 
> that a db can be created without a location set (because it is not strictly 
> required), and that the default db may be outside the same encryption zone 
> (or in an unencrypted zone), the alter table rename operation will fail.
> Improvement:
> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
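The current versus proposed destination paths can be sketched in a few lines of Python (a hypothetical illustration, not Hive's actual code; the warehouse path below is an assumed default):

```python
# Hypothetical sketch of the rename destination logic; not Hive's actual code.
import posixpath

WAREHOUSE = '/user/hive/warehouse'  # assumed default warehouse directory

def renamed_location_current(new_table, db_location):
    # Current behaviour: always place the renamed table under the database
    # directory, which for a location-less db lives under the warehouse dir.
    return posixpath.join(db_location, new_table)

def renamed_location_proposed(old_location, new_table):
    # Proposed improvement: keep the table's own parent directory, so the new
    # path stays inside the same encryption zone as the old one.
    return posixpath.join(posixpath.dirname(old_location), new_table)

old = '/hdfs/encrypted_path/db_enc/rename_test'
print(renamed_location_current('test_rename', WAREHOUSE))
# -> /user/hive/warehouse/test_rename (may cross out of the encryption zone)
print(renamed_location_proposed(old, 'test_rename'))
# -> /hdfs/encrypted_path/db_enc/test_rename (stays inside the zone)
```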



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597175#comment-15597175
 ] 

Daniel Dai commented on HIVE-14968:
---

It seems the test failures are real. Here is one stack trace I saw:
{code}
java.lang.Exception: java.lang.RuntimeException: Error in configuring object
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.RuntimeException: Error in configuring object
at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:109)
at 
org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
at 
org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
at 
org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:409)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:106)
... 10 more
Caused by: java.lang.RuntimeException: Reduce operator initialization failed
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:157)
... 14 more
Caused by: java.lang.RuntimeException: Cannot find ExprNodeEvaluator for the 
exprNodeDesc = null
at 
org.apache.hadoop.hive.ql.exec.ExprNodeEvaluatorFactory.get(ExprNodeEvaluatorFactory.java:57)
at 
org.apache.hadoop.hive.ql.exec.GroupByOperator.initializeOp(GroupByOperator.java:272)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:363)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:482)
at 
org.apache.hadoop.hive.ql.exec.Operator.initializeChildren(Operator.java:439)
at org.apache.hadoop.hive.ql.exec.Operator.initialize(Operator.java:376)
at 
org.apache.hadoop.hive.ql.exec.mr.ExecReducer.configure(ExecReducer.java:150)
... 14 more
{code}

But since this Jira is for the compilation error, I will commit the patch and 
create a separate ticket for test fixes.

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, 
> HIVE-14968-branch-1.2.patch, HIVE-14968.1.patch, HIVE-14968.3-branch-1.patch
>
>
> The branch-1 compilation failure is due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Commented] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597168#comment-15597168
 ] 

Hive QA commented on HIVE-12812:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834788/HIVE-12812.patch

{color:green}SUCCESS:{color} +1 due to 54 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[root_dir_external_table]
 (batchId=81)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[list_bucket_dml_2] 
(batchId=97)
org.apache.hadoop.hive.thrift.TestHadoopAuthBridge23.testDelegationTokenSharedStore
 (batchId=216)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1749/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1749/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1749/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834788 - PreCommit-HIVE-Build

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12812.patch, HIVE-12812.patch, HIVE-12812.patch
>
>
> When union remove optimization is enabled, a union query with an aggregate 
> function writes its subquery intermediate results to subdirectories, which 
> require mapred.input.dir.recursive to be enabled in order to be fetched. 
> This property is not set by default in Hive and is often overlooked by 
> users, which causes query failures that are hard to debug.
> So we need to set mapred.input.dir.recursive to true whenever union remove 
> optimization is enabled.
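The failure mode can be illustrated with a small filesystem sketch (the subdirectory naming below is only an assumption about the union-remove output layout, not Hive's exact one):

```python
# Sketch of why union-remove output needs recursive input listing.
# The subdirectory naming below is illustrative, not Hive's exact layout.
import os
import tempfile

root = tempfile.mkdtemp()
# Each union branch writes its intermediate result into its own subdirectory.
for i in (1, 2):
    sub = os.path.join(root, 'union_subdir_%d' % i)
    os.makedirs(sub)
    with open(os.path.join(sub, '000000_0'), 'w') as f:
        f.write('branch %d\n' % i)

def list_input_files(path, recursive):
    if recursive:  # behaves like mapred.input.dir.recursive=true
        return [os.path.join(d, name)
                for d, _, names in os.walk(path) for name in names]
    # mapred.input.dir.recursive=false: only top-level plain files are seen.
    return [os.path.join(path, name) for name in os.listdir(path)
            if os.path.isfile(os.path.join(path, name))]

assert list_input_files(root, recursive=False) == []  # reader sees no input
assert len(list_input_files(root, recursive=True)) == 2
```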





[jira] [Updated] (HIVE-15034) Fix orc_ppd_basic & current_date_timestamp tests

2016-10-21 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-15034:
---
Status: Patch Available  (was: Open)

> Fix orc_ppd_basic & current_date_timestamp tests
> 
>
> Key: HIVE-15034
> URL: https://issues.apache.org/jira/browse/HIVE-15034
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-15034.1.patch
>
>
> Started failing following HIVE-14913's failure





[jira] [Updated] (HIVE-15034) Fix orc_ppd_basic & current_date_timestamp tests

2016-10-21 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-15034:
---
Attachment: HIVE-15034.1.patch

> Fix orc_ppd_basic & current_date_timestamp tests
> 
>
> Key: HIVE-15034
> URL: https://issues.apache.org/jira/browse/HIVE-15034
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-15034.1.patch
>
>
> Started failing following HIVE-14913's failure





[jira] [Updated] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive de

2016-10-21 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14909:
---
Attachment: HIVE-14909.patch

All four new failed tests were caused by errors like:
{code}
2016-10-21T19:08:51,810 ERROR [ccc0909e-6764-441b-9eee-10b73d4ca198 main] 
exec.DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to alter 
table. java.lang.IllegalArgumentException: Can not create a Path from a null 
string
at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:620)
at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3373)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:376)
{code}
But interestingly, I was not able to reproduce the issue in my local env. 
I am reattaching the patch to kick off another run to see if the failures can 
be reproduced.

> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
> --
>
> Key: HIVE-14909
> URL: https://issues.apache.org/jira/browse/HIVE-14909
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Adriano
>Assignee: Chaoyu Tang
> Attachments: HIVE-14909.patch, HIVE-14909.patch
>
>
> Alter Table operation for db_enc.rename_test failed to move data due to: 
> '/hdfs/encrypted_path/db_enc/rename_test can't be moved from an encryption 
> zone.'
> When Hive renames a managed table, it always creates the new renamed table 
> directory under its database directory in order to keep a db/table 
> hierarchy. In this case, the renamed table directory is created under the 
> "default" db directory, typically set to /hive/warehouse/.
> This error doesn't appear if you first create a database that points to a 
> directory outside /hive/warehouse/, say '/hdfs/encrypted_path'. For example:
> create database db_enc location '/hdfs/encrypted_path/db_enc'; 
> use db_enc; 
> create table rename_test (...) location 
> '/hdfs/encrypted_path/db_enc/rename_test'; 
> alter table rename_test rename to test_rename; 
> The renamed test_rename directory is created under 
> /hdfs/encrypted_path/db_enc. 
> Considering that filesystem encryption is often part of the gradual 
> hardening of a system (where the system and its data may already exist), 
> that a db can be created without a location set (because it is not strictly 
> required), and that the default db may be outside the same encryption zone 
> (or in an unencrypted zone), the alter table rename operation will fail.
> Improvement:
> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).





[jira] [Commented] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597086#comment-15597086
 ] 

Hive QA commented on HIVE-12765:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834786/HIVE-12765.08.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10570 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_auto_mult_tables_compact]
 (batchId=32)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=131)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1748/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1748/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1748/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834786 - PreCommit-HIVE-Build

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>






[jira] [Commented] (HIVE-15016) Run tests with Hadoop 3.0.0-alpha1

2016-10-21 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597076#comment-15597076
 ] 

Peter Vary commented on HIVE-15016:
---

This HBase bug could be a problem:
HBASE-15812

It is fixed only in 2.0 (at least for now).

It affects every itest/*HBase* test case, since the HBase server does not start.

> Run tests with Hadoop 3.0.0-alpha1
> --
>
> Key: HIVE-15016
> URL: https://issues.apache.org/jira/browse/HIVE-15016
> Project: Hive
>  Issue Type: Task
>  Components: Hive
>Reporter: Sergio Peña
>Assignee: Sergio Peña
>
> Hadoop 3.0.0-alpha1 was released back on Sep/16 to allow other components to 
> run tests against this new version before GA.
> We should start running tests with Hive to validate compatibility against 
> Hadoop 3.0.
> NOTE: The patch used to test must not be committed to Hive until Hadoop 3.0 
> GA is released.





[jira] [Commented] (HIVE-14797) reducer number estimating may lead to data skew

2016-10-21 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597040#comment-15597040
 ] 

Rui Li commented on HIVE-14797:
---

[~roncenzhao], would you mind updating the patch as Xuefu suggested? Or let us 
know if you have better ideas.

> reducer number estimating may lead to data skew
> ---
>
> Key: HIVE-14797
> URL: https://issues.apache.org/jira/browse/HIVE-14797
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Reporter: roncenzhao
>Assignee: roncenzhao
> Attachments: HIVE-14797.2.patch, HIVE-14797.3.patch, 
> HIVE-14797.4.patch, HIVE-14797.patch
>
>
> HiveKey's hash code is generated by multiplying by 31 field by field, as 
> implemented in the method `ObjectInspectorUtils.getBucketHashCode()`:
> {code}
> for (int i = 0; i < bucketFields.length; i++) {
>   int fieldHash = ObjectInspectorUtils.hashCode(bucketFields[i], 
> bucketFieldInspectors[i]);
>   hashCode = 31 * hashCode + fieldHash;
> }
> {code}
> The following example will lead to data skew:
> I have two tables, tbl1 and tbl2, with the same columns: a int, b string. 
> The values of column 'a' in both tables are not skewed, but the values of 
> column 'b' in both tables are skewed.
> When my sql is "select * from tbl1 join tbl2 on tbl1.a=tbl2.a and 
> tbl1.b=tbl2.b" and the estimated reducer number is 31, it will lead to data 
> skew.
> As we know, the HiveKey's hash code is generated by `hash(a)*31 + hash(b)`. 
> When the reducer number is 31, the reducer index for each row is 
> `(hash(a)*31 + hash(b)) % 31 = hash(b) % 31`, since the `hash(a)*31` term 
> vanishes modulo 31. As a result, the job will be skewed.
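The modular-arithmetic argument above can be checked with a short simulation (a simplified model of `getBucketHashCode`, using Python's built-in `hash` as a stand-in for the per-field hash):

```python
# Simplified model of ObjectInspectorUtils.getBucketHashCode: h = 31*h + fh.
def bucket_hash(field_hashes):
    h = 0
    for fh in field_hashes:
        h = 31 * h + fh
    return h

num_reducers = 31
# 100 distinct values of 'a', but only 3 distinct values of 'b' (skewed).
rows = [(a, b) for a in range(100) for b in range(3)]

reducers = {bucket_hash([hash(a), hash(b)]) % num_reducers for a, b in rows}
# Since 31*hash(a) % 31 == 0, the reducer index depends only on hash(b),
# so all rows land on at most 3 of the 31 reducers.
assert reducers == {hash(b) % num_reducers for _, b in rows}
assert len(reducers) <= 3
```

With any reducer count that is coprime to 31 (say 30 or 32), the `hash(a)` term would contribute again and the rows would spread across many more reducers.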





[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-21 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597025#comment-15597025
 ] 

Vineet Garg commented on HIVE-14913:


Yeah, I'll create a new jira. How do I see the test failure logs?

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Fix For: 2.2.0
>
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch, HIVE-14913.9.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce 
> testing overhead





[jira] [Commented] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597014#comment-15597014
 ] 

Ashutosh Chauhan commented on HIVE-12765:
-

+1 pending tests

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>






[jira] [Updated] (HIVE-12812) Enable mapred.input.dir.recursive by default to support union with aggregate function

2016-10-21 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-12812:
---
Attachment: HIVE-12812.patch

> Enable mapred.input.dir.recursive by default to support union with aggregate 
> function
> -
>
> Key: HIVE-12812
> URL: https://issues.apache.org/jira/browse/HIVE-12812
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1, 2.1.0
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Attachments: HIVE-12812.patch, HIVE-12812.patch, HIVE-12812.patch
>
>
> When union remove optimization is enabled, a union query with an aggregate 
> function writes its subquery intermediate results to subdirectories, which 
> require mapred.input.dir.recursive to be enabled in order to be fetched. 
> This property is not set by default in Hive and is often overlooked by 
> users, which causes query failures that are hard to debug.
> So we need to set mapred.input.dir.recursive to true whenever union remove 
> optimization is enabled.





[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597020#comment-15597020
 ] 

Ashutosh Chauhan commented on HIVE-14913:
-

These tests are still failing. [~vgarg] Can you create a new jira, upload a 
patch, and trigger a run to get a clean run?

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Fix For: 2.2.0
>
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch, HIVE-14913.9.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce 
> testing overhead





[jira] [Updated] (HIVE-14609) HS2 cannot drop a function whose associated jar file has been removed

2016-10-21 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14609:
---
Attachment: HIVE-12812.patch

> HS2 cannot drop a function whose associated jar file has been removed
> -
>
> Key: HIVE-14609
> URL: https://issues.apache.org/jira/browse/HIVE-14609
> Project: Hive
>  Issue Type: Bug
>Reporter: Yibing Shi
>Assignee: Chaoyu Tang
> Attachments: HIVE-12812.patch
>
>
> Create a permanent function with the command below:
> {code:sql}
> create function yshi.dummy as 'com.yshi.hive.udf.DummyUDF' using jar 
> 'hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar';
> {code}
> After that, delete the HDFS file 
> {{hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar}}, and 
> *restart HS2 to remove the loaded class*.
> Now the function cannot be dropped:
> {noformat}
> 0: jdbc:hive2://10.17.81.144:1/default> show functions yshi.dummy;
> INFO  : Compiling 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded): 
> show functions yshi.dummy
> INFO  : Semantic Analysis Completed
> INFO  : Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from 
> deserializer)], properties:null)
> INFO  : Completed compiling 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded); 
> Time taken: 1.259 seconds
> INFO  : Executing 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded): 
> show functions yshi.dummy
> INFO  : Starting task [Stage-0:DDL] in serial mode
> INFO  : SHOW FUNCTIONS is deprecated, please use SHOW FUNCTIONS LIKE instead.
> INFO  : Completed executing 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded); 
> Time taken: 0.024 seconds
> INFO  : OK
> +-+--+
> |  tab_name   |
> +-+--+
> | yshi.dummy  |
> +-+--+
> 1 row selected (3.877 seconds)
> 0: jdbc:hive2://10.17.81.144:1/default> drop function yshi.dummy;
> INFO  : Compiling 
> command(queryId=hive_20160821213434_47d14df5-59b3-4ebc-9a48-5e1d9c60c1fc): 
> drop function yshi.dummy
> INFO  : converting to local 
> hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
> ERROR : Failed to read external resource 
> hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
> java.lang.RuntimeException: Failed to read external resource 
> hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1200)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1136)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1126)
>   at 
> org.apache.hadoop.hive.ql.exec.FunctionTask.addFunctionResources(FunctionTask.java:304)
>   at 
> org.apache.hadoop.hive.ql.exec.Registry.registerToSessionRegistry(Registry.java:470)
>   at 
> org.apache.hadoop.hive.ql.exec.Registry.getQualifiedFunctionInfo(Registry.java:456)
>   at 
> org.apache.hadoop.hive.ql.exec.Registry.getFunctionInfo(Registry.java:245)
>   at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:455)
>   at 
> org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:99)
>   at 
> org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:61)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:311)
>   at 
> org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1194)
>   at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1181)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:134)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:206)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:316)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:425)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:401)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:258)
>   at 
> 

[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Status: Patch Available  (was: Open)

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>






[jira] [Updated] (HIVE-14609) HS2 cannot drop a function whose associated jar file has been removed

2016-10-21 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14609:
---
Attachment: (was: HIVE-12812.patch)

> HS2 cannot drop a function whose associated jar file has been removed
> -
>
> Key: HIVE-14609
> URL: https://issues.apache.org/jira/browse/HIVE-14609
> Project: Hive
>  Issue Type: Bug
>Reporter: Yibing Shi
>Assignee: Chaoyu Tang
>
> Create a permanent function with the command below:
> {code:sql}
> create function yshi.dummy as 'com.yshi.hive.udf.DummyUDF' using jar 
> 'hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar';
> {code}
> After that, delete the HDFS file 
> {{hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar}}, and 
> *restart HS2 to remove the loaded class*.
> Now the function cannot be dropped:
> {noformat}
> 0: jdbc:hive2://10.17.81.144:1/default> show functions yshi.dummy;
> INFO  : Compiling 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded): 
> show functions yshi.dummy
> INFO  : Semantic Analysis Completed
> INFO  : Returning Hive schema: 
> Schema(fieldSchemas:[FieldSchema(name:tab_name, type:string, comment:from 
> deserializer)], properties:null)
> INFO  : Completed compiling 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded); 
> Time taken: 1.259 seconds
> INFO  : Executing 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded): 
> show functions yshi.dummy
> INFO  : Starting task [Stage-0:DDL] in serial mode
> INFO  : SHOW FUNCTIONS is deprecated, please use SHOW FUNCTIONS LIKE instead.
> INFO  : Completed executing 
> command(queryId=hive_20160821213434_d0271d77-84d8-45ba-8d92-3da1c143bded); 
> Time taken: 0.024 seconds
> INFO  : OK
> +-+--+
> |  tab_name   |
> +-+--+
> | yshi.dummy  |
> +-+--+
> 1 row selected (3.877 seconds)
> 0: jdbc:hive2://10.17.81.144:1/default> drop function yshi.dummy;
> INFO  : Compiling 
> command(queryId=hive_20160821213434_47d14df5-59b3-4ebc-9a48-5e1d9c60c1fc): 
> drop function yshi.dummy
> INFO  : converting to local 
> hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
> ERROR : Failed to read external resource 
> hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
> java.lang.RuntimeException: Failed to read external resource 
> hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1200)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1136)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1126)
>   at 
> org.apache.hadoop.hive.ql.exec.FunctionTask.addFunctionResources(FunctionTask.java:304)
>   at 
> org.apache.hadoop.hive.ql.exec.Registry.registerToSessionRegistry(Registry.java:470)
>   at 
> org.apache.hadoop.hive.ql.exec.Registry.getQualifiedFunctionInfo(Registry.java:456)
>   at 
> org.apache.hadoop.hive.ql.exec.Registry.getFunctionInfo(Registry.java:245)
>   at 
> org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionInfo(FunctionRegistry.java:455)
>   at 
> org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeDropFunction(FunctionSemanticAnalyzer.java:99)
>   at 
> org.apache.hadoop.hive.ql.parse.FunctionSemanticAnalyzer.analyzeInternal(FunctionSemanticAnalyzer.java:61)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:222)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:451)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:311)
>   at 
> org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1194)
>   at 
> org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1181)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:134)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:206)
>   at 
> org.apache.hive.service.cli.operation.Operation.run(Operation.java:316)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:425)
>   at 
> org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:401)
>   at 
> org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:258)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
> 
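A workaround sometimes suggested for this class of failure (it is not part of this JIRA; the path is simply the one from the reproduction above, and any readable jar would do) is to put a placeholder jar back at the registered HDFS location so the resource download succeeds, then drop the function:

```sql
-- First restore a placeholder jar at the path recorded for the function, e.g.:
--   hdfs dfs -put any.jar hdfs://host-10-17-81-142.coe.cloudera.com:8020/hive/jars/yshi.jar
-- With the resource readable again, the drop can resolve it and succeed:
drop function yshi.dummy;
```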

[jira] [Commented] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15597009#comment-15597009
 ] 

Hive QA commented on HIVE-14909:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834785/HIVE-14909.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[alter_view_rename] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[authorization_8] 
(batchId=9)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[recursive_view] 
(batchId=83)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
org.apache.hive.hcatalog.cli.TestSemanticAnalysis.testAlterTableRename 
(batchId=172)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1747/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1747/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1747/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834785 - PreCommit-HIVE-Build

> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
> --
>
> Key: HIVE-14909
> URL: https://issues.apache.org/jira/browse/HIVE-14909
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Adriano
>Assignee: Chaoyu Tang
> Attachments: HIVE-14909.patch
>
>
> Alter Table operation for db_enc.rename_test failed to move data due to: 
> '/hdfs/encrypted_path/db_enc/rename_test can't be moved from an encryption 
> zone.'
> When Hive renames a managed table, it always creates the new renamed table 
> directory under its database directory in order to keep a db/table 
> hierarchy. In this case, the renamed table directory is created under the 
> default db directory, typically set as /hive/warehouse/. 
> This error doesn't appear if you first create a database whose location 
> points to a directory outside /hive/warehouse/, say '/hdfs/encrypted_path'. 
> For example: 
> create database db_enc location '/hdfs/encrypted_path/db_enc'; 
> use db_enc; 
> create table rename_test (...) location 
> '/hdfs/encrypted_path/db_enc/rename_test'; 
> alter table rename_test rename to test_rename; 
> The renamed test_rename directory is created under 
> /hdfs/encrypted_path/db_enc. 
> Considering that filesystem encryption is often part of hardening an 
> existing system (where the system and its data may already exist), that a db 
> can be created without a location set (because one is not strictly 
> required), and that the default db may be outside the same encryption zone 
> (or in an unencrypted zone), the alter table rename operation will fail.
> Improvement:
> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).





[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Status: Open  (was: Patch Available)

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>






[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Attachment: HIVE-12765.08.patch

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch, HIVE-12765.08.patch
>
>






[jira] [Updated] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-14993:
--
Issue Type: Sub-task  (was: Bug)
Parent: HIVE-10924

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.6.patch, 
> HIVE-14993.7.patch, HIVE-14993.8.patch, HIVE-14993.patch, 
> debug.not2checkin.patch
>
>






[jira] [Commented] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596935#comment-15596935
 ] 

Hive QA commented on HIVE-12765:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834782/HIVE-12765.07.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1746/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1746/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1746/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-22 01:57:14.200
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1746/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-22 01:57:14.202
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 6cca991 HIVE-14913 : addendum patch
+ git clean -f -d
Removing common/src/java/org/apache/hadoop/hive/common/auth/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 6cca991 HIVE-14913 : addendum patch
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2016-10-22 01:57:15.144
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: patch -p1
patching file itests/src/test/resources/testconfiguration.properties
patching file ql/src/java/org/apache/hadoop/hive/ql/ErrorMsg.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveRelFactories.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveExcept.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/reloperators/HiveIntersect.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveExceptRewriteRule.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveIntersectMergeRule.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveIntersectRewriteRule.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveProjectOverIntersectRemoveRule.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveSortLimitPullUpConstantsRule.java
patching file 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/translator/ASTConverter.java
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/HiveLexer.g
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/IdentifiersParser.g
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/QBExpr.java
patching file ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java
patching file 
ql/src/test/org/apache/hadoop/hive/ql/parse/TestSQL11ReservedKeyWordsNegative.java
patching file ql/src/test/queries/clientpositive/except_all.q
patching file ql/src/test/queries/clientpositive/except_distinct.q
patching file ql/src/test/queries/clientpositive/intersect_all.q
patching file ql/src/test/queries/clientpositive/intersect_distinct.q
patching file ql/src/test/queries/clientpositive/intersect_merge.q
patching file ql/src/test/results/clientpositive/except_all.q.out
patching file 

[jira] [Commented] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596930#comment-15596930
 ] 

Hive QA commented on HIVE-15025:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834776/HIVE-15025.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] 
(batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]
 (batchId=144)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
 (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1745/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1745/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1745/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834776 - PreCommit-HIVE-Build

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch, HIVE-15025.2.patch
>
>
> HMS server should support SSL encryption. When the server is Kerberos 
> enabled, the traffic can be encrypted. But if Kerberos is not enabled, there 
> is no encryption between HS2 and HMS. 
> Similar to HS2, we should support encryption in both cases.





[jira] [Updated] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive de

2016-10-21 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14909:
---
Status: Patch Available  (was: Open)

> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
> --
>
> Key: HIVE-14909
> URL: https://issues.apache.org/jira/browse/HIVE-14909
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Adriano
>Assignee: Chaoyu Tang
> Attachments: HIVE-14909.patch
>
>
> Alter Table operation for db_enc.rename_test failed to move data due to: 
> '/hdfs/encrypted_path/db_enc/rename_test can't be moved from an encryption 
> zone.'
> When Hive renames a managed table, it always creates the new renamed table 
> directory under its database directory in order to keep a db/table 
> hierarchy. In this case, the renamed table directory is created under the 
> default db directory, typically set as /hive/warehouse/. 
> This error doesn't appear if you first create a database whose location 
> points to a directory outside /hive/warehouse/, say '/hdfs/encrypted_path'. 
> For example: 
> create database db_enc location '/hdfs/encrypted_path/db_enc'; 
> use db_enc; 
> create table rename_test (...) location 
> '/hdfs/encrypted_path/db_enc/rename_test'; 
> alter table rename_test rename to test_rename; 
> The renamed test_rename directory is created under 
> /hdfs/encrypted_path/db_enc. 
> Considering that filesystem encryption is often part of hardening an 
> existing system (where the system and its data may already exist), that a db 
> can be created without a location set (because one is not strictly 
> required), and that the default db may be outside the same encryption zone 
> (or in an unencrypted zone), the alter table rename operation will fail.
> Improvement:
> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).





[jira] [Updated] (HIVE-14909) Preserve the "parent location" of the table when an "alter table rename to " is submitted (the case when the db location is not specified and the Hive de

2016-10-21 Thread Chaoyu Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chaoyu Tang updated HIVE-14909:
---
Attachment: HIVE-14909.patch

Since Hive supports DDL like: 
create table foo (key int) location 'path_to_location' 
to create a managed table at an explicitly specified location rather than one 
under its database directory, a table rename should respect this specified 
location and not move the table's data. The location should instead be changed 
with a different command, 'alter table .. set location ...'.
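The distinction described above can be sketched in HiveQL (table and path names are illustrative, and the rename behavior shown is the one this patch proposes, not necessarily current behavior):

```sql
-- A managed table created with an explicit location:
create table foo (key int) location '/data/custom/foo';

-- Rename preserves the explicitly chosen parent location:
alter table foo rename to bar;   -- data stays under /data/custom/foo

-- Moving the data is a separate, explicit operation:
alter table bar set location '/data/custom/bar';
```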

> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).
> --
>
> Key: HIVE-14909
> URL: https://issues.apache.org/jira/browse/HIVE-14909
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Adriano
>Assignee: Chaoyu Tang
> Attachments: HIVE-14909.patch
>
>
> Alter Table operation for db_enc.rename_test failed to move data due to: 
> '/hdfs/encrypted_path/db_enc/rename_test can't be moved from an encryption 
> zone.'
> When Hive renames a managed table, it always creates the new renamed table 
> directory under its database directory in order to keep a db/table 
> hierarchy. In this case, the renamed table directory is created under the 
> default db directory, typically set as /hive/warehouse/. 
> This error doesn't appear if you first create a database whose location 
> points to a directory outside /hive/warehouse/, say '/hdfs/encrypted_path'. 
> For example: 
> create database db_enc location '/hdfs/encrypted_path/db_enc'; 
> use db_enc; 
> create table rename_test (...) location 
> '/hdfs/encrypted_path/db_enc/rename_test'; 
> alter table rename_test rename to test_rename; 
> The renamed test_rename directory is created under 
> /hdfs/encrypted_path/db_enc. 
> Considering that filesystem encryption is often part of hardening an 
> existing system (where the system and its data may already exist), that a db 
> can be created without a location set (because one is not strictly 
> required), and that the default db may be outside the same encryption zone 
> (or in an unencrypted zone), the alter table rename operation will fail.
> Improvement:
> Preserve the "parent location" of the table when an "alter table  
> rename to " is submitted (the case when the db location is not 
> specified and the Hive default db is outside the same encrypted zone).





[jira] [Commented] (HIVE-14953) don't use globStatus on S3 in MM tables

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596859#comment-15596859
 ] 

Hive QA commented on HIVE-14953:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834774/HIVE-14953.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1744/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1744/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1744/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-22 01:08:14.032
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1744/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-22 01:08:14.034
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 6cca991 HIVE-14913 : addendum patch
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 6cca991 HIVE-14913 : addendum patch
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2016-10-22 01:08:14.908
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: common/src/java/org/apache/hadoop/hive/common/ValidWriteIds.java: No 
such file or directory
error: patch failed: 
common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:3141
error: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: patch does 
not apply
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java:85
error: ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java: patch does 
not apply
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java:1589
error: ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: patch does not 
apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834774 - PreCommit-HIVE-Build

> don't use globStatus on S3 in MM tables
> ---
>
> Key: HIVE-14953
> URL: https://issues.apache.org/jira/browse/HIVE-14953
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14953.01.patch, HIVE-14953.patch
>
>
> Need to investigate whether a recursive get is faster. Also, a normal 
> listStatus might suffice, because the MM code handles the directory 
> structure in a more definite manner than the old code does; it knows where 
> the files of interest are to be found.





[jira] [Commented] (HIVE-14803) S3: Stats gathering for insert queries can be expensive for partitioned dataset

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596855#comment-15596855
 ] 

Hive QA commented on HIVE-14803:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834626/HIVE-14803.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 40 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[deleteAnalyze] 
(batchId=28)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_map_ppr_multi_distinct]
 (batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[index_auto_unused] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input_part7] (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part1] 
(batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part8] 
(batchId=59)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[load_dyn_part9] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_mapjoin] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_subquery] 
(batchId=45)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_1] (batchId=75)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nonmr_fetch_threshold] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge9] (batchId=24)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_noscan_1] 
(batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union26] (batchId=59)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=131)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[bucket6] (batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[transform_ppr1] (batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[autoColumnStats_2] (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp] (batchId=144)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[deleteAnalyze] (batchId=141)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge9] (batchId=140)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge9] (batchId=156)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby_map_ppr_multi_distinct] (batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[join28] (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[list_bucket_dml_2] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[load_dyn_part1] (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[load_dyn_part3] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[louter_join_ppr] (batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[mapjoin_mapjoin] (batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[sample1] (batchId=97)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[transform_ppr2] (batchId=110)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[union_lateralview] (batchId=103)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1743/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1743/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1743/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 40 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834626 - PreCommit-HIVE-Build

> S3: Stats gathering for insert queries can be expensive for partitioned 
> dataset
> ---
>
> Key: HIVE-14803
>   

[jira] [Commented] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-10-21 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596832#comment-15596832
 ] 

Shannon Ladymon commented on HIVE-13853:


Doc done. Removing TODOC label.

> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Fix For: 2.1.0
>
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in thrift-over-HTTP mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not we're using the filter; 
> if we are, we will automatically reject any HTTP requests which do not 
> contain this header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side program/API not using the JDBC driver directly will need to add 
> an X-XSRF-Header header to its requests in order to call HS2/WebHCat when 
> this filter is enabled.
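As a hedged illustration only: based on the *hive.server2.xsrf.filter.enabled* property documented later in this thread, enabling the server-side filter would be a hive-site.xml fragment along these lines (the WebHCat analogue has its own configuration, not shown here):

```xml
<!-- hive-site.xml sketch: have HS2 reject HTTP requests that lack the
     X-XSRF-Header header (property name taken from the doc note below) -->
<property>
  <name>hive.server2.xsrf.filter.enabled</name>
  <value>true</value>
</property>
```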



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14099) Hive security authorization can be disabled by users

2016-10-21 Thread Shannon Ladymon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shannon Ladymon updated HIVE-14099:
---
Labels:   (was: TODOC2.2)

> Hive security authorization can be disabled by users
> 
>
> Key: HIVE-14099
> URL: https://issues.apache.org/jira/browse/HIVE-14099
> Project: Hive
>  Issue Type: Improvement
>  Components: Authorization
>Affects Versions: 0.13.1
>Reporter: Prashant Kumar Singh
>Assignee: Aihua Xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14099.1.patch
>
>
> If we enable hive.security.authorization.enabled=true in hive-site.xml, 
> users can still disable this setting at their Hive prompt. There should be a 
> way to hard-code the setting in the configs.
> The other issue is that once we enable authorization, tables created before 
> enabling it lose access, as they don't have authorization defined. How can 
> this situation be tackled in Hive?
> Note that this issue does not affect the SQL Standard or Ranger 
> authorization plugins.
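A sketch of the "hard-coded setting" being asked for, using the *hive.conf.restricted.list* mechanism discussed elsewhere in this thread; listing a property there prevents users from overriding it in a session (illustrative fragment, not the shipped default value of the list):

```xml
<!-- hive-site.xml sketch: enable authorization and forbid per-session overrides -->
<property>
  <name>hive.security.authorization.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.conf.restricted.list</name>
  <!-- "set hive.security.authorization.enabled=false" now fails at the prompt -->
  <value>hive.security.authorization.enabled</value>
</property>
```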



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14099) Hive security authorization can be disabled by users

2016-10-21 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596830#comment-15596830
 ] 

Shannon Ladymon commented on HIVE-14099:


Doc done. Removing TODOC2.2 label.

> Hive security authorization can be disabled by users
> 
>
> Key: HIVE-14099
> URL: https://issues.apache.org/jira/browse/HIVE-14099
> Project: Hive
>  Issue Type: Improvement
>  Components: Authorization
>Affects Versions: 0.13.1
>Reporter: Prashant Kumar Singh
>Assignee: Aihua Xu
> Fix For: 2.2.0
>
> Attachments: HIVE-14099.1.patch
>
>
> If we enable hive.security.authorization.enabled=true in hive-site.xml, 
> users can still disable this setting at their Hive prompt. There should be a 
> way to hard-code the setting in the configs.
> The other issue is that once we enable authorization, tables created before 
> enabling it lose access, as they don't have authorization defined. How can 
> this situation be tackled in Hive?
> Note that this issue does not affect the SQL Standard or Ranger 
> authorization plugins.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-10-21 Thread Shannon Ladymon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shannon Ladymon updated HIVE-13853:
---
Labels:   (was: TODOC2.1)

> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Fix For: 2.1.0
>
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in thrift-over-HTTP mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not we're using the filter; 
> if we are, we will automatically reject any HTTP requests which do not 
> contain this header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side program/API not using the JDBC driver directly will need to add 
> an X-XSRF-Header header to its requests in order to call HS2/WebHCat when 
> this filter is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-13853) Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat

2016-10-21 Thread Shannon Ladymon (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596828#comment-15596828
 ] 

Shannon Ladymon commented on HIVE-13853:


Doc note:  
This adds the HiveConf parameter *hive.server2.xsrf.filter.enabled*:

* [Configuration Properties -- hive.server2.xsrf.filter.enabled | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.server2.xsrf.filter.enabled]

This also adds "hive.server2.xsrf.filter.enabled" to the default value of 
*hive.conf.restricted.list* which was created by HIVE-2935 in release 0.11.0 
and changed by HIVE-5953 in 0.13.0 and HIVE-6437 in 0.14.0:

* [Configuration Properties -- hive.conf.restricted.list | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.conf.restricted.list]

The default value is changed again by HIVE-14099 (2.2.0).


> Add X-XSRF-Header filter to HS2 HTTP mode and WebHCat
> -
>
> Key: HIVE-13853
> URL: https://issues.apache.org/jira/browse/HIVE-13853
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, WebHCat
>Reporter: Sushanth Sowmyan
>Assignee: Sushanth Sowmyan
> Fix For: 2.1.0
>
> Attachments: HIVE-13853.2.patch, HIVE-13853.patch
>
>
> There is a possibility of a CSRF-based attack on various Hadoop components, 
> and thus there is an effort to block all incoming HTTP requests that do not 
> contain an X-XSRF-Header header. (See HADOOP-12691 for motivation.)
> This has the potential to affect HS2 when running in thrift-over-HTTP mode 
> (if cookie-based auth is used), and WebHCat.
> We introduce new flags to determine whether or not we're using the filter; 
> if we are, we will automatically reject any HTTP requests which do not 
> contain this header.
> To allow this to work, we also need to change our JDBC driver to 
> automatically inject this header into any requests it makes. Also, any 
> client-side program/API not using the JDBC driver directly will need to add 
> an X-XSRF-Header header to its requests in order to call HS2/WebHCat when 
> this filter is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-6437) DefaultHiveAuthorizationProvider should not initialize a new HiveConf

2016-10-21 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596795#comment-15596795
 ] 

Lefty Leverenz commented on HIVE-6437:
--

Doc note:  This adds "hive.users.in.admin.role" to the default value of 
*hive.conf.restricted.list* which was created by HIVE-2935 in release 0.11.0 
and changed by HIVE-5953 in 0.13.0.

* [Configuration Properties -- hive.conf.restricted.list | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.conf.restricted.list]

The default value is changed again by HIVE-13853 (2.1.0) and HIVE-14099 (2.2.0).

> DefaultHiveAuthorizationProvider should not initialize a new HiveConf
> -
>
> Key: HIVE-6437
> URL: https://issues.apache.org/jira/browse/HIVE-6437
> Project: Hive
>  Issue Type: Bug
>  Components: Configuration
>Affects Versions: 0.13.0
>Reporter: Harsh J
>Assignee: Navis
>Priority: Trivial
> Fix For: 0.14.0
>
> Attachments: HIVE-6437.1.patch.txt, HIVE-6437.2.patch.txt, 
> HIVE-6437.3.patch.txt, HIVE-6437.4.patch.txt, HIVE-6437.5.patch.txt, 
> HIVE-6437.6.patch.txt, HIVE-6437.7.patch.txt
>
>
> During an HS2 connection, every SessionState initializes a new 
> DefaultHiveAuthorizationProvider object (on stock configs).
> In turn, DefaultHiveAuthorizationProvider creates a {{new HiveConf(…)}}, 
> which may prove too expensive and is unnecessary, since SessionState itself 
> passes a fully applied HiveConf to it in the first place.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-21 Thread Wei Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596793#comment-15596793
 ] 

Wei Zheng commented on HIVE-14993:
--

patch looks good. +1

> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.6.patch, 
> HIVE-14993.7.patch, HIVE-14993.8.patch, HIVE-14993.patch, 
> debug.not2checkin.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-5953) SQL std auth - authorize grant/revoke on table

2016-10-21 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596785#comment-15596785
 ] 

Lefty Leverenz commented on HIVE-5953:
--

Doc note:  This changes the default value of *hive.conf.restricted.list* which 
was created by HIVE-2935 in release 0.11.0.

* [Configuration Properties -- hive.conf.restricted.list | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.conf.restricted.list]

The default value is changed again by HIVE-6437 (0.14.0), HIVE-13853 (2.1.0), 
and HIVE-14099 (2.2.0).

> SQL std auth - authorize grant/revoke on table
> --
>
> Key: HIVE-5953
> URL: https://issues.apache.org/jira/browse/HIVE-5953
> Project: Hive
>  Issue Type: Sub-task
>  Components: Authorization
>Reporter: Thejas M Nair
>Assignee: Thejas M Nair
> Fix For: 0.13.0
>
> Attachments: HIVE-5953.1.patch, HIVE-5953.2.patch, HIVE-5953.3.patch, 
> HIVE-5953.4.patch, HIVE-5953.5.patch, HIVE-5953.6.patch
>
>   Original Estimate: 120h
>  Time Spent: 144h
>  Remaining Estimate: 0h
>
> A user (or a role the user belongs to) should have grant privileges in order 
> to grant/revoke privileges for a user/role.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Status: Patch Available  (was: Open)

Addressed [~ashutoshc]'s comments.

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Status: Open  (was: Patch Available)

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-12765) Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)

2016-10-21 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-12765:
---
Attachment: HIVE-12765.07.patch

> Support Intersect (distinct/all) Except (distinct/all) Minus (distinct/all)
> ---
>
> Key: HIVE-12765
> URL: https://issues.apache.org/jira/browse/HIVE-12765
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-12765.01.patch, HIVE-12765.02.patch, 
> HIVE-12765.03.patch, HIVE-12765.04.patch, HIVE-12765.05.patch, 
> HIVE-12765.06.patch, HIVE-12765.07.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596755#comment-15596755
 ] 

Hive QA commented on HIVE-14964:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834696/HIVE-14964.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10566 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp] (batchId=144)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
org.apache.hive.beeline.TestClassNameCompleter.addingAndEmptyFile (batchId=164)
org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark.org.apache.hive.jdbc.TestJdbcWithLocalClusterSpark (batchId=213)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1742/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1742/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1742/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834696 - PreCommit-HIVE-Build

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14964.1.patch
>
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]  0.12 sec  12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]  29 ms  12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]  42 ms  12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15032) Update/Delete statements use dynamic partitions when it's not necessary

2016-10-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-15032:
--
Description: 
{noformat}
create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
('transactional'='true')

   insert into TAB_PART partition(p='blah') values(1,2) //this uses static part
update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
{noformat}

The Update is rewritten into an Insert statement, but SemanticAnalyzer.genFileSink() 
for this Insert is set up with dynamic partitions.

At least in theory, we should be able to analyze the WHERE clause so that the 
Insert doesn't have to use DP.

Another important side effect of this is how locks are acquired.  If the table 
doesn't have partition 'blah', as it is, a SHARED_WRITE is acquired on the 
TAB_PART table.
However it would suffice to acquire a SHARED_WRITE on the single partition 
operated on, or better yet, short-circuit the query.

If the table does have partition 'blah', we get only the partition lock

see TestDbTxnManager2.testWriteSetTracking3() testWriteSetTracking5()

  was:
{noformat}
create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
('transactional'='true')

   insert into TAB_PART partition(p='blah') values(1,2) //this uses static part
update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
{noformat}

The Update is rewritten into an Insert statement, but SemanticAnalyzer.genFileSink() 
for this Insert is set up with dynamic partitions.

At least in theory, we should be able to analyze the WHERE clause so that the 
Insert doesn't have to use DP.

Another important side effect of this is how locks are acquired.  If the table 
doesn't have partition 'blah', as it is, a SHARED_WRITE is acquired on the 
TAB_PART table.
However it would suffice to acquire a SHARED_WRITE on the single partition 
operated on, or better yet, short-circuit the query.

If the table does have partition 'blah', we get only the partition lock


> Update/Delete statements use dynamic partitions when it's not necessary
> ---
>
> Key: HIVE-15032
> URL: https://issues.apache.org/jira/browse/HIVE-15032
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> {noformat}
> create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
> clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
> ('transactional'='true')
>insert into TAB_PART partition(p='blah') values(1,2) //this uses static 
> part
> update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
> {noformat}
> The Update is rewritten into an Insert statement, but 
> SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
> partitions.
> At least in theory, we should be able to analyze the WHERE clause so that 
> the Insert doesn't have to use DP.
> Another important side effect of this is how locks are acquired.  If the 
> table doesn't have partition 'blah', as it is, a SHARED_WRITE is acquired on 
> the TAB_PART table.
> However it would suffice to acquire a SHARED_WRITE on the single partition 
> operated on, or better yet, short-circuit the query.
> If the table does have partition 'blah', we get only the partition lock
> see TestDbTxnManager2.testWriteSetTracking3() testWriteSetTracking5()
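The WHERE-clause analysis proposed above can be sketched abstractly. This is an illustrative simplification, not Hive's planner: `partition_cols` and `where_predicates` are hypothetical inputs, and a real implementation would operate on the parsed AST. The idea is that the rewritten Insert may use a static partition spec only when every partition column is pinned to a constant by an equality predicate.

```python
def is_static_partition(partition_cols, where_predicates):
    """Return a fixed partition spec if every partition column is pinned to a
    constant by an equality predicate in the WHERE clause, else None
    (meaning dynamic partitioning would be required).

    where_predicates: list of (column, operator, constant) triples.
    """
    pinned = {col: const for col, op, const in where_predicates
              if op == "=" and col in partition_cols}
    # Static only if *all* partition columns are constrained to constants.
    return pinned if set(pinned) == set(partition_cols) else None

# update TAB_PART set b = 7 where p = 'blah'  -> static spec {'p': 'blah'}
print(is_static_partition(["p"], [("p", "=", "blah")]))
# update TAB_PART set b = 7 where b = 18      -> None, DP would be needed
print(is_static_partition(["p"], [("b", "=", 18)]))
```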



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14887) Reduce the memory requirements for tests

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596652#comment-15596652
 ] 

Hive QA commented on HIVE-14887:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834714/HIVE-14887.06.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10564 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=132)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp] (batchId=144)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] (batchId=164)
org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] (batchId=164)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1741/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1741/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1741/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834714 - PreCommit-HIVE-Build

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch, HIVE-14887.05.patch, 
> HIVE-14887.06.patch
>
>
> The clusters that we spin up can end up requiring 16GB at times. Also, the 
> Maven arguments seem a little heavyweight.
> Reducing this will allow additional ptest drones per box, which should 
> bring down the runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-13212) locking too coarse/broad for update/delete on a partition

2016-10-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-13212:
--
Description: 
create table acidTblPart (a int, b int) partitioned by (p string) clustered by 
(a) into " + BUCKET_COUNT + " buckets stored as orc TBLPROPERTIES 
('transactional'='true')

update acidTblPart set b = 17 where p = 1

This acquires a SHARED_WRITE on the table, while based on p = 1 we should be 
able to figure out that only one partition is affected and lock only that 
partition.

The same should apply to DELETE.

The above is true when the table is empty.  If the table has data, in 
particular the p=1 partition, then only that partition is locked.

However "update acidTblPart set b = 17 where b = 18" on a non-empty table 
will lock every partition separately.
For a table with 100K partitions this will be a performance issue.
We need to look into taking a table-level lock instead, or building general 
lock promotion logic.

The logic in SemanticAnalyzer seems to be to take all known partitions of the 
table being read and create ReadEntity objects for those that match the WHERE 
clause.
A ReadEntity for the table is also created, but due to logic in 
UpdateDeleteSemanticAnalyzer we ignore it.
(We set setUpdateOrDelete() on it but remove the corresponding WriteEntity and 
replace it with a WriteEntity for each partition.)

  was:
create table acidTblPart (a int, b int) partitioned by (p string) clustered by 
(a) into " + BUCKET_COUNT + " buckets stored as orc TBLPROPERTIES 
('transactional'='true')

update acidTblPart set b = 17 where p = 1

This acquires a SHARED_WRITE on the table, while based on p = 1 we should be 
able to figure out that only one partition is affected and lock only that 
partition.

The same should apply to DELETE.

The above is true when the table is empty.  If the table has data, in 
particular the p=1 partition, then only that partition is locked.

However "update acidTblPart set b = 17 where b = 18" on a non-empty table 
will lock every partition separately.
For a table with 100K partitions this will be a performance issue.
We need to look into taking a table-level lock instead, or building general 
lock promotion logic.


> locking too coarse/broad for update/delete on a partition
> -
>
> Key: HIVE-13212
> URL: https://issues.apache.org/jira/browse/HIVE-13212
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.2.1
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> create table acidTblPart (a int, b int) partitioned by (p string) clustered 
> by (a) into " + BUCKET_COUNT + " buckets stored as orc TBLPROPERTIES 
> ('transactional'='true')
> update acidTblPart set b = 17 where p = 1
> This acquires a SHARED_WRITE on the table, while based on p = 1 we should be 
> able to figure out that only one partition is affected and lock only that 
> partition.
> The same should apply to DELETE.
> The above is true when the table is empty.  If the table has data, in 
> particular the p=1 partition, then only that partition is locked.
> However "update acidTblPart set b = 17 where b = 18" on a non-empty table 
> will lock every partition separately.
> For a table with 100K partitions this will be a performance issue.
> We need to look into taking a table-level lock instead, or building general 
> lock promotion logic.
> The logic in SemanticAnalyzer seems to be to take all known partitions of 
> the table being read and create ReadEntity objects for those that match the 
> WHERE clause.
> A ReadEntity for the table is also created, but due to logic in 
> UpdateDeleteSemanticAnalyzer we ignore it.
> (We set setUpdateOrDelete() on it but remove the corresponding WriteEntity 
> and replace it with a WriteEntity for each partition.)
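The "general lock promotion logic" suggested above can be sketched as follows. The function, threshold, and lock representation are made up for illustration and do not reflect Hive's actual DbTxnManager: when the WHERE clause matches more partitions than some threshold (or none at all), take one coarse table-level lock instead of one lock per partition.

```python
def plan_locks(table, matched_partitions, promotion_threshold=1000):
    """Return SHARED_WRITE lock targets for an update/delete.

    Lock each matched partition individually, but promote to a single
    table-level lock when no partition matched or when per-partition
    locking would be too expensive (e.g. 100K partitions).
    """
    if not matched_partitions or len(matched_partitions) > promotion_threshold:
        return [("SHARED_WRITE", table)]               # one coarse table lock
    return [("SHARED_WRITE", "%s/%s" % (table, p))     # one lock per partition
            for p in matched_partitions]

print(plan_locks("acidTblPart", ["p=1"]))   # fine-grained: one partition lock
print(plan_locks("acidTblPart", ["p=%d" % i for i in range(100000)]))  # promoted
```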



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596629#comment-15596629
 ] 

Aihua Xu commented on HIVE-15025:
-

Fixed the compile issue.

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch, HIVE-15025.2.patch
>
>
> The HMS server should support SSL encryption. When the server is Kerberos 
> enabled, encryption can be enabled; but if Kerberos is not enabled, there is 
> no encryption between HS2 and HMS.
> Similar to HS2, we should support encryption in both cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15025:

Status: In Progress  (was: Patch Available)

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch, HIVE-15025.2.patch
>
>
> The HMS server should support SSL encryption. When the server is Kerberos 
> enabled, encryption can be enabled; but if Kerberos is not enabled, there is 
> no encryption between HS2 and HMS.
> Similar to HS2, we should support encryption in both cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15025:

Status: Patch Available  (was: In Progress)

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch, HIVE-15025.2.patch
>
>
> The HMS server should support SSL encryption. When the server is Kerberos 
> enabled, encryption can be enabled; but if Kerberos is not enabled, there is 
> no encryption between HS2 and HMS.
> Similar to HS2, we should support encryption in both cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15025:

Attachment: HIVE-15025.2.patch

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch, HIVE-15025.2.patch
>
>
> The HMS server should support SSL encryption. When the server is Kerberos 
> enabled, encryption can be enabled; but if Kerberos is not enabled, there is 
> no encryption between HS2 and HMS.
> Similar to HS2, we should support encryption in both cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14953) don't use globStatus on S3 in MM tables

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14953:

Attachment: (was: HIVE-14953.01.patch)

> don't use globStatus on S3 in MM tables
> ---
>
> Key: HIVE-14953
> URL: https://issues.apache.org/jira/browse/HIVE-14953
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14953.01.patch, HIVE-14953.patch
>
>
> Need to investigate whether a recursive get is faster. Also, a normal 
> listStatus might suffice, because the MM code handles the directory structure 
> in a more definite manner than the old code, so it knows where the files of 
> interest are to be found.
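The trade-off between glob expansion and a direct listing can be sketched with the JDK alone (a hypothetical illustration; the real code goes through Hadoop's FileSystem API, where every extra directory probe on S3 is a remote call):

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ListingSketch {
    // globStatus-style: walk the tree and match a pattern against every
    // entry. On S3 this translates into many remote directory probes.
    static List<Path> glob(Path root, String pattern) throws IOException {
        PathMatcher m = root.getFileSystem().getPathMatcher("glob:" + pattern);
        try (Stream<Path> s = Files.walk(root)) {
            return s.filter(p -> m.matches(root.relativize(p)))
                    .collect(Collectors.toList());
        }
    }

    // listStatus-style: list one known directory directly. This works when
    // the caller already knows where the files of interest live, as the MM
    // layout guarantees.
    static List<Path> list(Path dir) throws IOException {
        try (Stream<Path> s = Files.list(dir)) {
            return s.collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("mm");
        Path part = Files.createDirectory(root.resolve("delta_1"));
        Files.createFile(part.resolve("bucket_0"));
        System.out.println(glob(root, "delta_*/bucket_*").size()); // 1
        System.out.println(list(part).size());                     // 1
    }
}
```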





[jira] [Updated] (HIVE-14953) don't use globStatus on S3 in MM tables

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14953:

Attachment: HIVE-14953.01.patch

> don't use globStatus on S3 in MM tables
> ---
>
> Key: HIVE-14953
> URL: https://issues.apache.org/jira/browse/HIVE-14953
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14953.01.patch, HIVE-14953.patch
>
>
> Need to investigate whether a recursive get is faster. Also, a normal 
> listStatus might suffice, because the MM code handles the directory structure 
> in a more definite manner than the old code, so it knows where the files of 
> interest are to be found.





[jira] [Updated] (HIVE-14953) don't use globStatus on S3 in MM tables

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-14953:

Attachment: HIVE-14953.01.patch

Updated patch. Unfortunately the local logic for S3 is ugly, given what it 
returns. [~rajesh.balamohan], does this make sense?

> don't use globStatus on S3 in MM tables
> ---
>
> Key: HIVE-14953
> URL: https://issues.apache.org/jira/browse/HIVE-14953
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14953.01.patch, HIVE-14953.patch
>
>
> Need to investigate whether a recursive get is faster. Also, a normal 
> listStatus might suffice, because the MM code handles the directory structure 
> in a more definite manner than the old code, so it knows where the files of 
> interest are to be found.





[jira] [Updated] (HIVE-15032) Update/Delete statements use dynamic partitions when it's not necessary

2016-10-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-15032:
--
Description: 
{noformat}
create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
('transactional'='true')

   insert into TAB_PART partition(p='blah') values(1,2) //this uses static part
update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
{noformat}

The Update is rewritten into an Insert statement, but 
SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
partitions.

At least in theory, we should be able to analyze the WHERE clause so that the 
Insert doesn't have to use DP.

Another important side effect of this is how locks are acquired.  If the table 
doesn't have partition 'blah', as it is, a SHARED_WRITE is acquired on the 
TAB_PART table.
However, it would suffice to acquire a SHARED_WRITE on the single partition 
operated on or, better yet, to short-circuit the query.

If the table does have partition 'blah', we get only the partition lock.

  was:
{noformat}
create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
('transactional'='true')

   insert into TAB_PART partition(p='blah') values(1,2) //this uses static part
update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
{noformat}

The Update is rewritten into an Insert statement, but 
SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
partitions.

At least in theory, we should be able to analyze the WHERE clause so that the 
Insert doesn't have to use DP.

Another important side effect of this is how locks are acquired.  As it is, a 
SHARED_WRITE is acquired on the TAB_PART table.
However, it would suffice to acquire a SHARED_WRITE on the single partition 
operated on.


> Update/Delete statements use dynamic partitions when it's not necessary
> ---
>
> Key: HIVE-15032
> URL: https://issues.apache.org/jira/browse/HIVE-15032
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> {noformat}
> create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
> clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
> ('transactional'='true')
>insert into TAB_PART partition(p='blah') values(1,2) //this uses static 
> part
> update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
> {noformat}
> The Update is rewritten into an Insert statement, but 
> SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
> partitions.
> At least in theory, we should be able to analyze the WHERE clause so that the 
> Insert doesn't have to use DP.
> Another important side effect of this is how locks are acquired.  If the 
> table doesn't have partition 'blah', as it is, a SHARED_WRITE is acquired on 
> the TAB_PART table.
> However, it would suffice to acquire a SHARED_WRITE on the single partition 
> operated on or, better yet, to short-circuit the query.
> If the table does have partition 'blah', we get only the partition lock.





[jira] [Commented] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596575#comment-15596575
 ] 

Hive QA commented on HIVE-15025:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834750/HIVE-15025.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1740/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1740/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1740/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-core/1.14/jersey-core-1.14.jar(javax/ws/rs/core/Response.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/codehaus/jackson/jackson-mapper-asl/1.9.13/jackson-mapper-asl-1.9.13.jar(org/codehaus/jackson/map/ObjectMapper.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Exception.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Throwable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Serializable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Enum.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/Comparable.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-server/1.14/jersey-server-1.14.jar(com/sun/jersey/api/core/PackagesResourceConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/com/sun/jersey/jersey-servlet/1.14/jersey-servlet-1.14.jar(com/sun/jersey/spi/container/servlet/ServletContainer.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/FileInputStream.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/StringUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/ql/target/hive-exec-2.2.0-SNAPSHOT.jar(org/apache/commons/lang3/ArrayUtils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/common/target/hive-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-hdfs/2.7.2/hadoop-hdfs-2.7.2.jar(org/apache/hadoop/hdfs/web/AuthFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-2.2.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/Utils.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/security/UserGroupInformation.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-auth/2.7.2/hadoop-auth-2.7.2.jar(org/apache/hadoop/security/authentication/client/PseudoAuthenticator.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-auth/2.7.2/hadoop-auth-2.7.2.jar(org/apache/hadoop/security/authentication/server/PseudoAuthenticationHandler.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/2.7.2/hadoop-common-2.7.2.jar(org/apache/hadoop/util/GenericOptionsParser.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/rewrite/handler/RedirectPatternRule.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/rewrite/handler/RewriteHandler.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/server/Handler.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/server/Server.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/aggregate/jetty-all-server/7.6.0.v20120127/jetty-all-server-7.6.0.v20120127.jar(org/eclipse/jetty/server/handler/HandlerList.class)]]
[loading 

[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596550#comment-15596550
 ] 

Hive QA commented on HIVE-14968:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834746/HIVE-14968.3-branch-1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 223 failed/errored test(s), 8067 tests 
executed
*Failed tests:*
{noformat}
TestAcidOnTez - did not produce a TEST-*.xml file (likely timed out) 
(batchId=383)
TestAdminUser - did not produce a TEST-*.xml file (likely timed out) 
(batchId=365)
TestAuthorizationPreEventListener - did not produce a TEST-*.xml file (likely 
timed out) (batchId=398)
TestAuthzApiEmbedAuthorizerInEmbed - did not produce a TEST-*.xml file (likely 
timed out) (batchId=375)
TestAuthzApiEmbedAuthorizerInRemote - did not produce a TEST-*.xml file (likely 
timed out) (batchId=381)
TestBeeLineWithArgs - did not produce a TEST-*.xml file (likely timed out) 
(batchId=405)
TestBitFieldReader - did not produce a TEST-*.xml file (likely timed out) 
(batchId=489)
TestBitPack - did not produce a TEST-*.xml file (likely timed out) (batchId=485)
TestBuddyAllocator - did not produce a TEST-*.xml file (likely timed out) 
(batchId=829)
TestCLIAuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=422)
TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed 
out) (batchId=390)
TestClientSideAuthorizationProvider - did not produce a TEST-*.xml file (likely 
timed out) (batchId=397)
TestColumnStatistics - did not produce a TEST-*.xml file (likely timed out) 
(batchId=465)
TestColumnStatisticsImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=479)
TestCompactor - did not produce a TEST-*.xml file (likely timed out) 
(batchId=386)
TestConverters - did not produce a TEST-*.xml file (likely timed out) 
(batchId=774)
TestCreateUdfEntities - did not produce a TEST-*.xml file (likely timed out) 
(batchId=385)
TestCustomAuthentication - did not produce a TEST-*.xml file (likely timed out) 
(batchId=406)
TestDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=349)
TestDDLWithRemoteMetastoreSecondNamenode - did not produce a TEST-*.xml file 
(likely timed out) (batchId=384)
TestDataReaderProperties - did not produce a TEST-*.xml file (likely timed out) 
(batchId=483)
TestDruidSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=463)
TestDynamicArray - did not produce a TEST-*.xml file (likely timed out) 
(batchId=481)
TestDynamicSerDe - did not produce a TEST-*.xml file (likely timed out) 
(batchId=352)
TestEmbeddedHiveMetaStore - did not produce a TEST-*.xml file (likely timed 
out) (batchId=362)
TestEmbeddedThriftBinaryCLIService - did not produce a TEST-*.xml file (likely 
timed out) (batchId=409)
TestFileDump - did not produce a TEST-*.xml file (likely timed out) 
(batchId=467)
TestFilterHooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=357)
TestFirstInFirstOutComparator - did not produce a TEST-*.xml file (likely timed 
out) (batchId=838)
TestFolderPermissions - did not produce a TEST-*.xml file (likely timed out) 
(batchId=392)
TestHS2AuthzContext - did not produce a TEST-*.xml file (likely timed out) 
(batchId=425)
TestHS2AuthzSessionContext - did not produce a TEST-*.xml file (likely timed 
out) (batchId=426)
TestHS2ImpersonationWithRemoteMS - did not produce a TEST-*.xml file (likely 
timed out) (batchId=413)
TestHiveAuthorizerCheckInvocation - did not produce a TEST-*.xml file (likely 
timed out) (batchId=401)
TestHiveAuthorizerShowFilters - did not produce a TEST-*.xml file (likely timed 
out) (batchId=400)
TestHiveDruidQueryBasedInputFormat - did not produce a TEST-*.xml file (likely 
timed out) (batchId=462)
TestHiveHistory - did not produce a TEST-*.xml file (likely timed out) 
(batchId=403)
TestHiveMetaStoreTxns - did not produce a TEST-*.xml file (likely timed out) 
(batchId=377)
TestHiveMetaStoreWithEnvironmentContext - did not produce a TEST-*.xml file 
(likely timed out) (batchId=367)
TestHiveMetaTool - did not produce a TEST-*.xml file (likely timed out) 
(batchId=380)
TestHiveServer2 - did not produce a TEST-*.xml file (likely timed out) 
(batchId=428)
TestHiveServer2SessionTimeout - did not produce a TEST-*.xml file (likely timed 
out) (batchId=429)
TestHiveSessionImpl - did not produce a TEST-*.xml file (likely timed out) 
(batchId=410)
TestHplsqlLocal - did not produce a TEST-*.xml file (likely timed out) 
(batchId=497)
TestHplsqlOffline - did not produce a TEST-*.xml file (likely timed out) 
(batchId=496)
TestHs2Hooks - did not produce a TEST-*.xml file (likely timed out) 
(batchId=382)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=457)
TestInStream - did not produce a TEST-*.xml file (likely 

[jira] [Commented] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596457#comment-15596457
 ] 

Siddharth Seth commented on HIVE-15031:
---

Yep. ptest has been failing because of the ongoing DNS outage. github.com and 
raw.githubusercontent.com have been taking turns, which has caused ptest 
failures at various stages.

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15031.patch
>
>






[jira] [Commented] (HIVE-14496) Enable Calcite rewriting with materialized views

2016-10-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596413#comment-15596413
 ] 

Ashutosh Chauhan commented on HIVE-14496:
-

[~jcamachorodriguez] Can you create an RB entry for this?

> Enable Calcite rewriting with materialized views
> 
>
> Key: HIVE-14496
> URL: https://issues.apache.org/jira/browse/HIVE-14496
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14496.01.patch, HIVE-14496.02.patch, 
> HIVE-14496.patch
>
>
> Calcite already supports query rewriting using materialized views. We will 
> use it to support this feature in Hive.
> In order to do that, we need to register the existing materialized views with 
> the Calcite view service and enable the materialized-view rewriting rules.
> We should include a HiveConf flag to completely disable query rewriting using 
> materialized views if necessary.
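The register-then-rewrite flow, including the proposed kill switch, can be sketched abstractly (a toy model with hypothetical names; real Hive registers RelNode plans with Calcite's materialization machinery rather than matching query text):

```java
import java.util.HashMap;
import java.util.Map;

public class MvRewriteSketch {
    // Toy registry: defining query text -> materialized view (table) name.
    private final Map<String, String> materializations = new HashMap<>();
    private final boolean rewritingEnabled; // models the HiveConf flag

    MvRewriteSketch(boolean rewritingEnabled) {
        this.rewritingEnabled = rewritingEnabled;
    }

    void register(String definingQuery, String mvName) {
        materializations.put(definingQuery, mvName);
    }

    // If the incoming query exactly matches a registered definition, answer
    // it from the materialization; otherwise leave the query untouched.
    String rewrite(String query) {
        if (!rewritingEnabled) return query;
        String mv = materializations.get(query);
        return mv != null ? "SELECT * FROM " + mv : query;
    }

    public static void main(String[] args) {
        MvRewriteSketch s = new MvRewriteSketch(true);
        s.register("SELECT a, SUM(b) FROM t GROUP BY a", "mv_t_agg");
        System.out.println(s.rewrite("SELECT a, SUM(b) FROM t GROUP BY a"));
    }
}
```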





[jira] [Comment Edited] (HIVE-15032) Update/Delete statements use dynamic partitions when it's not necessary

2016-10-21 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596364#comment-15596364
 ] 

Eugene Koifman edited comment on HIVE-15032 at 10/21/16 9:20 PM:
-

Some pointers from [~jcamachorodriguez]
{noformat}
Currently the partition pruning optimization is carried out in CBO, while 
partition column pruner removal (i.e. removal of the parts of a Filter 
predicate that are not necessary because partition pruning kicked in) is still 
executed in Hive.

I could not figure out what your question refers to, but I attach some pointers:
- Partition column property is carried out in ExprNodeColumnDesc. Further, 
Table in org.apache.hadoop.hive.ql.metadata exposes information about the 
partition columns for each Table.
- RelOptHiveTable contains logic to compute the necessary partition list for a 
given predicate. This logic relies on PartitionPruner, that can be found in 
org.apache.hadoop.hive.ql.optimizer.ppr.
- Partition pruning in CBO is carried out by HivePartitionPruneRule, which 
relies on the aforementioned method in RelOptHiveTable.
{noformat}

{noformat}
It seems to me that it will not be easy to do it directly from 
UpdateDeleteSemanticAnalyzer, 
as you are translating AST -> AST and thus the necessary aux structures to 
resolve the 
partitions for the query have not been created.

However, given an INSERT query, if Calcite has optimized the plan, you would be 
able to obtain 
the partitions from SemanticAnalyzer and rewrite to avoid dynamic partition 
insert. In particular,
 you can obtain the list of partitions for a given query from prunedPartitions 
in SemanticAnalyzer. 
Once again, this is only valid if Calcite has optimized the plan, as otherwise 
prunedPartitions is not 
populated till the partition pruner optimization runs in Hive.
{noformat}

One simple way to fix this is to rewrite the query twice: Update -> Insert, 
then SemanticAnalyzer.analyze() on it as is done currently in 
UpdateDeleteSemanticAnalyzer to create the structures Jesus is referring to, 
and then generate another Insert with static partition info in it (if any was 
found).  This should be well worth it to eliminate DP and "fix" locking along 
the way.

Another option is to look at the results in SemanticAnalyzer of analyzing the 
WHERE clause and then modify the structures used in genFileSink() to supply 
static partition values.  Effectively, this teaches the optimizer to recognize 
that the Insert statement is reading and writing the same partition.  This is 
probably not common outside of Update/Delete/Merge.


was (Author: ekoifman):
Some pointers from [~jcamachorodriguez]
{noformat}
Currently the partition pruning optimization is carried out in CBO, while 
partition column pruner removal (i.e. removal of the parts of a Filter 
predicate that are not necessary because partition pruning kicked in) is still 
executed in Hive.

I could not figure out what your question refers to, but I attach some pointers:
- Partition column property is carried out in ExprNodeColumnDesc. Further, 
Table in org.apache.hadoop.hive.ql.metadata exposes information about the 
partition columns for each Table.
- RelOptHiveTable contains logic to compute the necessary partition list for a 
given predicate. This logic relies on PartitionPruner, that can be found in 
org.apache.hadoop.hive.ql.optimizer.ppr.
- Partition pruning in CBO is carried out by HivePartitionPruneRule, which 
relies on the aforementioned method in RelOptHiveTable.
{noformat}

{noformat}
It seems to me that it will not be easy to do it directly from 
UpdateDeleteSemanticAnalyzer, as you are translating AST -> AST and thus the 
necessary aux structures to resolve the partitions for the query have not been 
created.

However, given an INSERT query, if Calcite has optimized the plan, you would be 
able to obtain the partitions from SemanticAnalyzer and rewrite to avoid 
dynamic partition insert. In particular, you can obtain the list of partitions 
for a given query from prunedPartitions in SemanticAnalyzer. Once again, this 
is only valid if Calcite has optimized the plan, as otherwise prunedPartitions 
is not populated till the partition pruner optimization runs in Hive.
{noformat}

One simple way to fix this is to rewrite the query twice: Update -> Insert, 
then SemanticAnalyzer.analyze() on it as is done currently in 
UpdateDeleteSemanticAnalyzer to create the structures Jesus is referring to, 
and then generate another Insert with static partition info in it (if any was 
found).  This should be well worth it to eliminate DP and "fix" locking along 
the way.

Another option is to look at the results in SemanticAnalyzer of analyzing the 
WHERE clause and then modify the structures used in genFileSink() to supply 
static partition values.  Effectively, this teaches the optimizer to recognize 
that the Insert statement is reading and writing the same partition.  

[jira] [Updated] (HIVE-15032) Update/Delete statements use dynamic partitions when it's not necessary

2016-10-21 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-15032:
--
Description: 
{noformat}
create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
('transactional'='true')

   insert into TAB_PART partition(p='blah') values(1,2) //this uses static part
update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
{noformat}

The Update is rewritten into an Insert statement, but 
SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
partitions.

At least in theory, we should be able to analyze the WHERE clause so that the 
Insert doesn't have to use DP.

Another important side effect of this is how locks are acquired.  As it is, a 
SHARED_WRITE is acquired on the TAB_PART table.
However, it would suffice to acquire a SHARED_WRITE on the single partition 
operated on.

  was:
{noformat}
create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
('transactional'='true')

   insert into TAB_PART partition(p='blah') values(1,2) //this uses static part
update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
{noformat}

The Update is rewritten into an Insert statement, but 
SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
partitions.

At least in theory, we should be able to analyze the WHERE clause so that the 
Insert doesn't have to use DP.


> Update/Delete statements use dynamic partitions when it's not necessary
> ---
>
> Key: HIVE-15032
> URL: https://issues.apache.org/jira/browse/HIVE-15032
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> {noformat}
> create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
> clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
> ('transactional'='true')
>insert into TAB_PART partition(p='blah') values(1,2) //this uses static 
> part
> update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
> {noformat}
> The Update is rewritten into an Insert statement, but 
> SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
> partitions.
> At least in theory, we should be able to analyze the WHERE clause so that the 
> Insert doesn't have to use DP.
> Another important side effect of this is how locks are acquired.  As it is, a 
> SHARED_WRITE is acquired on the TAB_PART table.
> However, it would suffice to acquire a SHARED_WRITE on the single partition 
> operated on.





[jira] [Commented] (HIVE-15032) Update/Delete statements use dynamic partitions when it's not necessary

2016-10-21 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596364#comment-15596364
 ] 

Eugene Koifman commented on HIVE-15032:
---

Some pointers from [~jcamachorodriguez]
{noformat}
Currently the partition pruning optimization is carried out in CBO, while 
partition column pruner removal (i.e. removal of the parts of a Filter 
predicate that are not necessary because partition pruning kicked in) is still 
executed in Hive.

I could not figure out what your question refers to, but I attach some pointers:
- Partition column property is carried out in ExprNodeColumnDesc. Further, 
Table in org.apache.hadoop.hive.ql.metadata exposes information about the 
partition columns for each Table.
- RelOptHiveTable contains logic to compute the necessary partition list for a 
given predicate. This logic relies on PartitionPruner, that can be found in 
org.apache.hadoop.hive.ql.optimizer.ppr.
- Partition pruning in CBO is carried out by HivePartitionPruneRule, which 
relies on the aforementioned method in RelOptHiveTable.
{noformat}

{noformat}
It seems to me that it will not be easy to do it directly from 
UpdateDeleteSemanticAnalyzer, as you are translating AST -> AST and thus the 
necessary aux structures to resolve the partitions for the query have not been 
created.

However, given an INSERT query, if Calcite has optimized the plan, you would be 
able to obtain the partitions from SemanticAnalyzer and rewrite to avoid 
dynamic partition insert. In particular, you can obtain the list of partitions 
for a given query from prunedPartitions in SemanticAnalyzer. Once again, this 
is only valid if Calcite has optimized the plan, as otherwise prunedPartitions 
is not populated till the partition pruner optimization runs in Hive.
{noformat}

One simple way to fix this is to rewrite the query twice: Update -> Insert, 
then SemanticAnalyzer.analyze() on it as is done currently in 
UpdateDeleteSemanticAnalyzer to create the structures Jesus is referring to, 
and then generate another Insert with static partition info in it (if any was 
found).  This should be well worth it to eliminate DP and "fix" locking along 
the way.

Another option is to look at the results in SemanticAnalyzer of analyzing the 
WHERE clause and then modify the structures used in genFileSink() to supply 
static partition values.  Effectively, this teaches the optimizer to recognize 
that the Insert statement is reading and writing the same partition.  This is 
probably not common outside of Update/Delete/Merge.
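The core check behind both options — every partition column pinned by an equality predicate in the WHERE clause, so the rewritten Insert can use a static partition spec — can be sketched in miniature (hypothetical names; real Hive inspects the AST, not a pre-digested map of predicates):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

public class StaticPartSketch {
    // Given the table's partition columns and the equality predicates found
    // in the WHERE clause (column -> literal), return a static partition
    // spec if every partition column is pinned; otherwise fall back to DP.
    static Optional<Map<String, String>> staticSpec(
            Set<String> partCols, Map<String, String> equalityPreds) {
        Map<String, String> spec = new LinkedHashMap<>();
        for (String col : partCols) {
            String v = equalityPreds.get(col);
            if (v == null) return Optional.empty(); // must use DP
            spec.put(col, v);
        }
        return Optional.of(spec);
    }

    public static void main(String[] args) {
        // Mirrors the example: update TAB_PART set b = 7 where p = 'blah'
        Map<String, String> preds = Map.of("p", "blah");
        System.out.println(staticSpec(Set.of("p"), preds)); // Optional[{p=blah}]
    }
}
```

A spec returned here would also let lock acquisition target the single partition rather than the whole table.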

> Update/Delete statements use dynamic partitions when it's not necessary
> ---
>
> Key: HIVE-15032
> URL: https://issues.apache.org/jira/browse/HIVE-15032
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> {noformat}
> create table if not exists TAB_PART (a int, b int)  partitioned by (p string) 
> clustered by (a) into 2  buckets stored as orc TBLPROPERTIES 
> ('transactional'='true')
>insert into TAB_PART partition(p='blah') values(1,2) //this uses static 
> part
> update TAB_PART set b = 7 where p = 'blah' //this uses DP... WHY?
> {noformat}
> The Update is rewritten into an Insert statement, but 
> SemanticAnalyzer.genFileSink() for this Insert is set up with dynamic 
> partitions.
> At least in theory, we should be able to analyze the WHERE clause so that the 
> Insert doesn't have to use DP.





[jira] [Updated] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15025:

Status: Patch Available  (was: Open)

Patch 1: initial patch. Refactors some auth functions into common/auth so they 
can be used by both HS2 and HMS. Adds the configuration to enable HMS SSL and 
adds SSL on top of Thrift.

Did basic testing. Will test on a real cluster.

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch
>
>
> The HMS server should support SSL encryption. When the server is Kerberos 
> enabled, encryption can be enabled; but if Kerberos is not enabled, there is 
> no encryption between HS2 and HMS.
> Similar to HS2, we should support encryption in both cases.





[jira] [Updated] (HIVE-15025) Secure-Socket-Layer (SSL) support for HMS

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-15025:

Attachment: HIVE-15025.1.patch

> Secure-Socket-Layer (SSL) support for HMS
> -
>
> Key: HIVE-15025
> URL: https://issues.apache.org/jira/browse/HIVE-15025
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
> Attachments: HIVE-15025.1.patch
>
>
> The HMS server should support SSL encryption. When the server is Kerberos 
> enabled, encryption can be enabled; but if Kerberos is not enabled, there is 
> no encryption between HS2 and HMS.
> Similar to HS2, we should support encryption in both cases.





[jira] [Commented] (HIVE-14884) Test result cleanup before 2.1.1 release

2016-10-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-14884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596301#comment-15596301
 ] 

Sergio Peña commented on HIVE-14884:


[~mmccline] Any updates on the test failures?

> Test result cleanup before 2.1.1 release
> 
>
> Key: HIVE-14884
> URL: https://issues.apache.org/jira/browse/HIVE-14884
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14884-branch-2.1.patch, 
> HIVE-14884.2-branch-2.1.patch
>
>
> There are multiple tests failing on the 2.1 branch.
> Before releasing 2.1.1 it would be good to clean up this list.





[jira] [Updated] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-14968:
--
Attachment: HIVE-14968.3-branch-1.patch

Renamed to HIVE-14968.3-branch-1.patch, as the regex 
"^HIVE-[0-9]+(\.[0-9]+)?(-[A-Za-z0-9.-]+)?\.(patch|patch.txt)$" implies. The 
test failures are due to a github.com DNS issue today; may have to rerun later.
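The rename can be sanity-checked locally against that pattern. A minimal sketch (the regex is quoted verbatim from the comment above, including its unescaped dot in `patch.txt`; the helper class and filenames are illustrative, not part of ptest):

```java
import java.util.regex.Pattern;

public class PatchNameCheck {
    // Precommit attachment-name pattern, quoted verbatim from the comment above.
    static final Pattern PATCH_NAME = Pattern.compile(
        "^HIVE-[0-9]+(\\.[0-9]+)?(-[A-Za-z0-9.-]+)?\\.(patch|patch.txt)$");

    static boolean isValid(String name) {
        return PATCH_NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        // Issue key, optional ".N" revision, optional "-branch" suffix, ".patch".
        System.out.println(isValid("HIVE-14968.3-branch-1.patch")); // true
        System.out.println(isValid("HIVE-14968.diff"));             // false
    }
}
```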

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, 
> HIVE-14968-branch-1.2.patch, HIVE-14968.1.patch, HIVE-14968.3-branch-1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Commented] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596248#comment-15596248
 ] 

Sergey Shelukhin commented on HIVE-15031:
-

I suspect that the env issue is because github is down w/o a hosts file fix.

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15031.patch
>
>






[jira] [Commented] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596233#comment-15596233
 ] 

Prasanth Jayachandran commented on HIVE-15031:
--

+1

There are some environment issues with ptest; I am wondering if this flakiness 
is because of an environment issue.
cc [~sseth]

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15031.patch
>
>






[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596199#comment-15596199
 ] 

Hive QA commented on HIVE-14968:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834740/HIVE-14968-branch-1.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1738/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1738/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:02:14.085
+ [[ -n /usr/lib/jvm/java-7-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-7-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-7-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1738/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-1.2 ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:02:14.088
+ cd apache-github-source-source
+ git fetch origin
fatal: unable to access 'https://github.com/apache/hive.git/': Could not 
resolve host: github.com
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834740 - PreCommit-HIVE-Build

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, 
> HIVE-14968-branch-1.2.patch, HIVE-14968.1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596196#comment-15596196
 ] 

Hive QA commented on HIVE-14968:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834740/HIVE-14968-branch-1.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1736/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1736/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1736/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:00:59.988
+ [[ -n /usr/lib/jvm/java-7-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-7-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-7-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1736/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z branch-1.2 ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:00:59.991
+ cd apache-github-source-source
+ git fetch origin
fatal: unable to access 'https://github.com/apache/hive.git/': Could not 
resolve host: github.com
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834740 - PreCommit-HIVE-Build

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, 
> HIVE-14968-branch-1.2.patch, HIVE-14968.1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Commented] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596198#comment-15596198
 ] 

Hive QA commented on HIVE-15031:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834738/HIVE-15031.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1737/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1737/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1737/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:01:39.701
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1737/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:01:39.703
+ cd apache-github-source-source
+ git fetch origin
fatal: unable to access 'https://github.com/apache/hive.git/': Could not 
resolve host: github.com
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834738 - PreCommit-HIVE-Build

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15031.patch
>
>






[jira] [Commented] (HIVE-14803) S3: Stats gathering for insert queries can be expensive for partitioned dataset

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596191#comment-15596191
 ] 

Hive QA commented on HIVE-14803:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834626/HIVE-14803.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1735/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1735/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1735/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:00:19.119
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1735/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-21 20:00:19.126
+ cd apache-github-source-source
+ git fetch origin
fatal: unable to access 'https://github.com/apache/hive.git/': Could not 
resolve host: github.com
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834626 - PreCommit-HIVE-Build

> S3: Stats gathering for insert queries can be expensive for partitioned 
> dataset
> ---
>
> Key: HIVE-14803
> URL: https://issues.apache.org/jira/browse/HIVE-14803
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.1.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HIVE-14803.1.patch, HIVE-14803.2.patch, 
> HIVE-14803.3.patch
>
>
> StatsTask's aggregateStats populates stats details for all partitions by 
> checking the file sizes which turns out to be expensive when larger number of 
> partitions are inserted. 





[jira] [Commented] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596188#comment-15596188
 ] 

Hive QA commented on HIVE-14964:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834696/HIVE-14964.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1734/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1734/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1734/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-21 19:59:32.400
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1734/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-21 19:59:32.403
+ cd apache-github-source-source
+ git fetch origin
fatal: unable to access 'https://github.com/apache/hive.git/': Could not 
resolve host: github.com
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834696 - PreCommit-HIVE-Build

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14964.1.patch
>
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0] 0.12 
> sec12
>  
> org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]
> 29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1] 42 ms   
> 12
> {noformat}





[jira] [Commented] (HIVE-14887) Reduce the memory requirements for tests

2016-10-21 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596184#comment-15596184
 ] 

Hive QA commented on HIVE-14887:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12834714/HIVE-14887.06.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/1733/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/1733/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-1733/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2016-10-21 19:58:51.864
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost 
-Dhttp.proxyPort=3128'
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-1733/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2016-10-21 19:58:51.867
+ cd apache-github-source-source
+ git fetch origin
fatal: unable to access 'https://github.com/apache/hive.git/': Could not 
resolve host: github.com
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12834714 - PreCommit-HIVE-Build

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch, HIVE-14887.05.patch, 
> HIVE-14887.06.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also the maven 
> arguments seem a little heavy weight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.





[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596170#comment-15596170
 ] 

Daniel Dai commented on HIVE-14968:
---

Rerun precommit.

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, 
> HIVE-14968-branch-1.2.patch, HIVE-14968.1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Updated] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-14968:
--
Attachment: HIVE-14968-branch-1.2.patch

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, 
> HIVE-14968-branch-1.2.patch, HIVE-14968.1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Updated] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15031:

Status: Patch Available  (was: Open)

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15031.patch
>
>






[jira] [Updated] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15031:

Attachment: HIVE-15031.patch

Cannot repro locally, so I'll just add some logging and increase the timeout. 
[~prasanth_j] can you please review? thanks

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-15031.patch
>
>






[jira] [Updated] (HIVE-15031) Fix flaky LLAP unit test (TestSlotZNode)

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15031:

Summary: Fix flaky LLAP unit test (TestSlotZNode)  (was: Fix flaky LLAP 
unit tests)

> Fix flaky LLAP unit test (TestSlotZNode)
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>






[jira] [Commented] (HIVE-14953) don't use globStatus on S3 in MM tables

2016-10-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15596046#comment-15596046
 ] 

Sergey Shelukhin commented on HIVE-14953:
-

That only returns files, but we can determine directories from those. I will 
add a configurable option for S3. 
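The point about a files-only listing generalizes: each file's parent directory can be recovered from its path, so no per-directory list calls are needed. A minimal sketch with plain strings (the helper and paths are hypothetical, not the MM-table code or the Hadoop FileSystem API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class DirsFromFiles {
    // A recursive S3 listing returns only files; derive the distinct
    // parent directories from the file paths themselves.
    static Set<String> parentDirs(List<String> files) {
        Set<String> dirs = new TreeSet<>();
        for (String f : files) {
            int slash = f.lastIndexOf('/');
            if (slash > 0) {
                dirs.add(f.substring(0, slash));
            }
        }
        return dirs;
    }

    public static void main(String[] args) {
        System.out.println(parentDirs(Arrays.asList(
            "warehouse/t/p=1/delta_1/bucket_0",
            "warehouse/t/p=1/delta_1/bucket_1",
            "warehouse/t/p=2/delta_1/bucket_0")));
        // [warehouse/t/p=1/delta_1, warehouse/t/p=2/delta_1]
    }
}
```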

> don't use globStatus on S3 in MM tables
> ---
>
> Key: HIVE-14953
> URL: https://issues.apache.org/jira/browse/HIVE-14953
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Rajesh Balamohan
>Assignee: Sergey Shelukhin
> Fix For: hive-14535
>
> Attachments: HIVE-14953.patch
>
>
> Need to investigate if recursive get is faster. Also, normal listStatus might 
> suffice because MM code handles directory structure in a more definite manner 
> than old code; so it knows where the files of interest are to be found.





[jira] [Assigned] (HIVE-15031) Fix flaky LLAP tests

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-15031:
---

Assignee: Sergey Shelukhin

> Fix flaky LLAP tests
> 
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>






[jira] [Updated] (HIVE-15031) Fix flaky LLAP unit tests

2016-10-21 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-15031:

Summary: Fix flaky LLAP unit tests  (was: Fix flaky LLAP tests)

> Fix flaky LLAP unit tests
> -
>
> Key: HIVE-15031
> URL: https://issues.apache.org/jira/browse/HIVE-15031
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>






[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595930#comment-15595930
 ] 

Sergey Shelukhin commented on HIVE-14968:
-

+1

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, HIVE-14968.1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Commented] (HIVE-14968) Fix compilation failure on branch-1

2016-10-21 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595839#comment-15595839
 ] 

Prasanth Jayachandran commented on HIVE-14968:
--

lgtm, +1

bq. I see a LONG being casted to INT. Isn't going to have overflow issues if 
the LONG value is too large to fit?
batchSize is not expected to be larger (1024 is the default and it is not 
configurable). The cast is required because batchSize is received as a long but 
ensureSize() requires it as an int. This is fixed in master, so master does not 
need the explicit cast, but branch-1 will. The intent of HIVE-14483 is just to 
invoke ensureSize(), and the down-cast is safe in this context since batchSize 
is not expected to be >1024 inside of Hive.
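The safety argument can also be enforced rather than assumed; a minimal sketch of a fail-fast down-cast (the helper is illustrative, not the branch-1 code; Math.toIntExact is Java 8+, so branch-1 on Java 7 would need an explicit bounds check instead):

```java
public class BatchSizeCast {
    // batchSize arrives as a long but ensureSize() wants an int; instead of a
    // silent narrowing cast, throw ArithmeticException if the value ever
    // exceeds int range.
    static int toIntBatchSize(long batchSize) {
        return Math.toIntExact(batchSize);
    }

    public static void main(String[] args) {
        System.out.println(toIntBatchSize(1024L)); // 1024, the default batch size
    }
}
```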

> Fix compilation failure on branch-1
> ---
>
> Key: HIVE-14968
> URL: https://issues.apache.org/jira/browse/HIVE-14968
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 1.3.0
>
> Attachments: HIVE-14968-branch-1.1.patch, HIVE-14968.1.patch
>
>
> branch-1 compilation failure due to:
> HIVE-14436: Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException 
> Error: , expected at the end of 'decimal(9'" after enabling 
> hive.optimize.skewjoin and with MR engine
> HIVE-14483 : java.lang.ArrayIndexOutOfBoundsException 
> org.apache.orc.impl.TreeReaderFactory.commonReadByteArrays
> 1.2 branch is fine.





[jira] [Updated] (HIVE-13589) beeline support prompt for password with '-p' option

2016-10-21 Thread Vihang Karajgaonkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vihang Karajgaonkar updated HIVE-13589:
---
Attachment: HIVE-13589.11.patch

Attaching the updated patch with review comments addressed. Thanks for the 
review, [~Ferd]!

> beeline support prompt for password with '-p' option
> 
>
> Key: HIVE-13589
> URL: https://issues.apache.org/jira/browse/HIVE-13589
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Reporter: Thejas M Nair
>Assignee: Vihang Karajgaonkar
> Fix For: 2.2.0
>
> Attachments: HIVE-13589.1.patch, HIVE-13589.10.patch, 
> HIVE-13589.11.patch, HIVE-13589.2.patch, HIVE-13589.3.patch, 
> HIVE-13589.4.patch, HIVE-13589.5.patch, HIVE-13589.6.patch, 
> HIVE-13589.7.patch, HIVE-13589.8.patch, HIVE-13589.9.patch
>
>
> Specifying connection string using commandline options in beeline is 
> convenient, as it gets saved in shell command history, and it is easy to 
> retrieve it from there.
> However, specifying the password in command prompt is not secure as it gets 
> displayed on screen and saved in the history.
> It should be possible to specify '-p' without an argument to make beeline 
> prompt for password.





[jira] [Updated] (HIVE-14887) Reduce the memory requirements for tests

2016-10-21 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14887:
--
Attachment: (was: HIVE-14887.06.patch)

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch, HIVE-14887.05.patch, 
> HIVE-14887.06.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also, the Maven 
> arguments seem a little heavyweight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14887) Reduce the memory requirements for tests

2016-10-21 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14887:
--
Attachment: HIVE-14887.06.patch

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch, HIVE-14887.05.patch, 
> HIVE-14887.06.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also, the Maven 
> arguments seem a little heavyweight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14496) Enable Calcite rewriting with materialized views

2016-10-21 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-14496:
---
Attachment: HIVE-14496.02.patch

> Enable Calcite rewriting with materialized views
> 
>
> Key: HIVE-14496
> URL: https://issues.apache.org/jira/browse/HIVE-14496
> Project: Hive
>  Issue Type: Sub-task
>  Components: Materialized views
>Affects Versions: 2.2.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14496.01.patch, HIVE-14496.02.patch, 
> HIVE-14496.patch
>
>
> Calcite already supports query rewriting using materialized views. We will 
> use it to support this feature in Hive.
> In order to do that, we need to register the existing materialized views with 
> the Calcite view service and enable the materialized view rewriting rules. 
> We should include a HiveConf flag to completely disable query rewriting using 
> materialized views if necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595672#comment-15595672
 ] 

Siddharth Seth commented on HIVE-14964:
---

Thanks for taking this up [~kgyrtkirk]. The diff in the indentation is a result 
of a try/catch block - can't really do much about it. I have re-submitted to 
ptest - let's see what the exception trace shows.

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14964.1.patch
>
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]   0.12 sec   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]   29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]   42 ms   12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Vihang Karajgaonkar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595631#comment-15595631
 ] 

Vihang Karajgaonkar commented on HIVE-14964:


Hi [~kgyrtkirk], thanks for the patch. It seems like a good idea, but it is hard 
to see the actual diffs due to the indentation mismatch between the patch and 
the source. Can you please fix that? Thanks!

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14964.1.patch
>
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]   0.12 sec   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]   29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]   42 ms   12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-14993) make WriteEntity distinguish writeType

2016-10-21 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15594136#comment-15594136
 ] 

Eugene Koifman edited comment on HIVE-14993 at 10/21/16 4:26 PM:
-

from 
https://builds.apache.org/view/H-L/view/Hive/job/PreCommit-HIVE-Build/1701/testReport/
  for HIVE-13589

i.e. the failures for patch 8 are not related to the patch

|Test Name|Duration|Age|
|org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic]|16 sec|3|
|org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]|4 sec|3|
|org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]|0.11 sec|148|
|org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]|27 ms|148|
|org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]|34 ms|148|



was (Author: ekoifman):
from 
https://builds.apache.org/view/H-L/view/Hive/job/PreCommit-HIVE-Build/1701/testReport/
  for HIVE-13589

Test Name   Duration   Age
 org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic]   16 sec   3
 org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[current_date_timestamp]   4 sec   3
 org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]   0.11 sec   148
 org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]   27 ms   148
 org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]   34 ms   148


> make WriteEntity distinguish writeType
> --
>
> Key: HIVE-14993
> URL: https://issues.apache.org/jira/browse/HIVE-14993
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-14993.2.patch, HIVE-14993.3.patch, 
> HIVE-14993.4.patch, HIVE-14993.5.patch, HIVE-14993.6.patch, 
> HIVE-14993.7.patch, HIVE-14993.8.patch, HIVE-14993.patch, 
> debug.not2checkin.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14887) Reduce the memory requirements for tests

2016-10-21 Thread Siddharth Seth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Seth updated HIVE-14887:
--
Attachment: HIVE-14887.06.patch

> Reduce the memory requirements for tests
> 
>
> Key: HIVE-14887
> URL: https://issues.apache.org/jira/browse/HIVE-14887
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Siddharth Seth
>Assignee: Siddharth Seth
> Attachments: HIVE-14887.01.patch, HIVE-14887.02.patch, 
> HIVE-14887.03.patch, HIVE-14887.04.patch, HIVE-14887.05.patch, 
> HIVE-14887.06.patch
>
>
> The clusters that we spin up end up requiring 16GB at times. Also, the Maven 
> arguments seem a little heavyweight.
> Reducing this will allow for additional ptest drones per box, which should 
> bring down the runtime.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-21 Thread Siddharth Seth (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595512#comment-15595512
 ] 

Siddharth Seth commented on HIVE-14913:
---

Can we please have another precommit test run to make sure the tests actually 
passed before committing this?

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Fix For: 2.2.0
>
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch, HIVE-14913.9.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce testing 
> overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14964:

Attachment: HIVE-14964.1.patch

patch#1 will only make it a bit clearer what is happening inside 
{{ClassNameCompleter}}...at least it will show which file has some issues ;)

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14964.1.patch
>
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]   0.12 sec   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]   29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]   42 ms   12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-14964:

Status: Patch Available  (was: Open)

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
> Attachments: HIVE-14964.1.patch
>
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]   0.12 sec   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]   29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]   42 ms   12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HIVE-14964) Failing Test: Fix TestBeelineArgParsing tests

2016-10-21 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-14964:
---

Assignee: Zoltan Haindrich

> Failing Test: Fix TestBeelineArgParsing tests
> -
>
> Key: HIVE-14964
> URL: https://issues.apache.org/jira/browse/HIVE-14964
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Zoltan Haindrich
>
> Failing last several builds:
> {noformat}
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[0]   0.12 sec   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJarWithoutAddDriverClazz[0]   29 ms   12
>  org.apache.hive.beeline.TestBeelineArgParsing.testAddLocalJar[1]   42 ms   12
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HIVE-14979) Removing stale Zookeeper locks at HiveServer2 initialization

2016-10-21 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595316#comment-15595316
 ] 

Peter Vary edited comment on HIVE-14979 at 10/21/16 2:56 PM:
-

Hi [~sershe],

I am not sure what you mean by "ride over" of locks. In my tests, if the 
HiveServer2 is killed with "kill -9" and restarted, the old locks remain only 
until their timeout expires (max. 20 min). The new HiveServer2 will have a 
different sessionId and will create different locks. So I think that if the 
session timeout is lowered to a reasonable value, we might not need the patch 
in the end (it will not hurt to have the extra possibility, but it adds 
complexity and another source of error).

I am not absolutely sure about LLAP, because the code around it is not 
trivial, but I think it uses a different timeout value in 
LlapStatusServiceDriver.run():

{code}
HiveConf.setVar(conf, HiveConf.ConfVars.HIVE_ZOOKEEPER_SESSION_TIMEOUT, (conf
  .getLong(CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS, 
CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT) +
  "ms"));
{code}

The default value of CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT is 10000, which 
means a 10s timeout in the end.

Thanks,
Peter


was (Author: pvary):
Hi [~sershe],

I am not sure what you mean about "ride over" of locks. By my tests, if the 
HiveServer2 is killed by "kill -9" and restarted, the old locks remain only 
until their timeout expires (max. 20 min). The new HiveServer2 will have a 
different sessionId and will create different locks. So I think, if the session 
timeout is lowered to a reasonable value we might not need the patch in the end 
(it will not hurt to have the extra possibility, but adds complexity and 
another source of error) 

I am not absolutely sure about the LLAP because the code around it is not 
trivial, but I think it uses a different timeout value in 
LlapStatusServiceDriver.run():

{code}
HiveConf.setVar(conf, HiveConf.ConfVars.HIVE_ZOOKEEPER_SESSION_TIMEOUT, (conf
  .getLong(CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS, 
CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT) +
  "ms"));
{code}

The default value of CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT is 10s.

Thanks,
Peter

> Removing stale Zookeeper locks at HiveServer2 initialization
> 
>
> Key: HIVE-14979
> URL: https://issues.apache.org/jira/browse/HIVE-14979
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14979.3.patch, HIVE-14979.4.patch, HIVE-14979.patch
>
>
> HiveServer2 can use Zookeeper to store tokens that indicate that particular 
> tables are locked, by creating persistent Zookeeper objects. 
> A problem can occur when a HiveServer2 instance creates a lock on a table and 
> then crashes ("Out of Memory", for example), so the locks 
> are not released in Zookeeper. Such a lock will then remain until it is 
> manually cleared by an admin.
> There should be a way to remove stale locks at HiveServer2 initialization, 
> making the admin's life easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14979) Removing stale Zookeeper locks at HiveServer2 initialization

2016-10-21 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595316#comment-15595316
 ] 

Peter Vary commented on HIVE-14979:
---

Hi [~sershe],

I am not sure what you mean by "ride over" of locks. In my tests, if the 
HiveServer2 is killed with "kill -9" and restarted, the old locks remain only 
until their timeout expires (max. 20 min). The new HiveServer2 will have a 
different sessionId and will create different locks. So I think that if the 
session timeout is lowered to a reasonable value, we might not need the patch 
in the end (it will not hurt to have the extra possibility, but it adds 
complexity and another source of error).

I am not absolutely sure about LLAP, because the code around it is not 
trivial, but I think it uses a different timeout value in 
LlapStatusServiceDriver.run():

{code}
HiveConf.setVar(conf, HiveConf.ConfVars.HIVE_ZOOKEEPER_SESSION_TIMEOUT, (conf
  .getLong(CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS, 
CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT) +
  "ms"));
{code}

The default value of CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT is 10000 ms, 
i.e. a 10s timeout.

Thanks,
Peter
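As a hedged illustration of the quoted snippet (the 10000 ms default is an assumption based on the 10s figure above; the real code reads the value through HiveConf/Configuration), the session-timeout string is built by appending "ms" to the configured long value:

```java
// Sketch only: mirrors the 'long value + "ms"' pattern from the quoted code.
// The 10000 ms default is assumed from the comment's 10s figure, and the
// HiveConf/Configuration lookup is replaced by a plain nullable parameter.
public class ZkTimeoutSketch {
    static final long CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT = 10000L;

    static String sessionTimeout(Long configuredMs) {
        long ms = (configuredMs != null)
                ? configuredMs
                : CONFIG_LLAP_ZK_REGISTRY_TIMEOUT_MS_DEFAULT;
        return ms + "ms"; // e.g. "10000ms", a 10 second ZK session timeout
    }

    public static void main(String[] args) {
        System.out.println(sessionTimeout(null));
    }
}
```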

> Removing stale Zookeeper locks at HiveServer2 initialization
> 
>
> Key: HIVE-14979
> URL: https://issues.apache.org/jira/browse/HIVE-14979
> Project: Hive
>  Issue Type: Improvement
>  Components: Locking
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-14979.3.patch, HIVE-14979.4.patch, HIVE-14979.patch
>
>
> HiveServer2 can use Zookeeper to store tokens that indicate that particular 
> tables are locked, by creating persistent Zookeeper objects. 
> A problem can occur when a HiveServer2 instance creates a lock on a table and 
> then crashes ("Out of Memory", for example), so the locks 
> are not released in Zookeeper. Such a lock will then remain until it is 
> manually cleared by an admin.
> There should be a way to remove stale locks at HiveServer2 initialization, 
> making the admin's life easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14924) MSCK REPAIR table with single threaded is throwing null pointer exception

2016-10-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595255#comment-15595255
 ] 

Ashutosh Chauhan commented on HIVE-14924:
-

[~pxiong] Re-attach patch to trigger QA?

> MSCK REPAIR table with single threaded is throwing null pointer exception
> -
>
> Key: HIVE-14924
> URL: https://issues.apache.org/jira/browse/HIVE-14924
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 2.2.0
>Reporter: Ratheesh Kamoor
>Assignee: Pengcheng Xiong
> Attachments: HIVE-14924.01.patch
>
>
> MSCK REPAIR TABLE is throwing a NullPointerException while running in 
> single-threaded mode (hive.mv.files.thread=0)
> Error:
> 2016-10-10T22:27:13,564 ERROR [e9ce04a8-2a84-426d-8e79-a2d15b8cee09 
> main([])]: exec.DDLTask (DDLTask.java:failed(581)) - 
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkPartitionDirs(HiveMetaStoreChecker.java:423)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.findUnknownPartitions(HiveMetaStoreChecker.java:315)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkTable(HiveMetaStoreChecker.java:291)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkTable(HiveMetaStoreChecker.java:236)
>   at 
> org.apache.hadoop.hive.ql.metadata.HiveMetaStoreChecker.checkMetastore(HiveMetaStoreChecker.java:113)
>   at org.apache.hadoop.hive.ql.exec.DDLTask.msck(DDLTask.java:1834)
> In order to reproduce:
> set hive.mv.files.thread=0 and run the MSCK REPAIR TABLE command



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11208) Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a "string" type

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-11208:

Attachment: HIVE-11208.1.patch

> Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a 
> "string" type
> 
>
> Key: HIVE-11208
> URL: https://issues.apache.org/jira/browse/HIVE-11208
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 1.1.0
>Reporter: Yongzhi Chen
>Assignee: Aihua Xu
> Attachments: HIVE-11208.1.patch
>
>
> When the partition is not a string type (for example, an int type), dropping 
> the default partition __HIVE_DEFAULT_PARTITION__ fails with:
> SemanticException Unexpected unknown partitions
> Reproduce:
> {noformat}
> SET hive.exec.dynamic.partition=true;
> SET hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=1;
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (col1 string) PARTITIONED BY (p1 int) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;
> INSERT OVERWRITE TABLE test PARTITION (p1) SELECT code, IF(salary > 600, 100, 
> null) as p1 FROM jsmall;
> hive> SHOW PARTITIONS test;
> OK
> p1=100
> p1=__HIVE_DEFAULT_PARTITION__
> Time taken: 0.124 seconds, Fetched: 2 row(s)
> hive> ALTER TABLE test DROP partition (p1 = '__HIVE_DEFAULT_PARTITION__');
> FAILED: SemanticException Unexpected unknown partitions for (p1 = null)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11208) Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a "string" type

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-11208:

Status: Patch Available  (was: In Progress)

> Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a 
> "string" type
> 
>
> Key: HIVE-11208
> URL: https://issues.apache.org/jira/browse/HIVE-11208
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 1.1.0
>Reporter: Yongzhi Chen
>Assignee: Aihua Xu
> Attachments: HIVE-11208.1.patch
>
>
> When the partition is not a string type (for example, an int type), dropping 
> the default partition __HIVE_DEFAULT_PARTITION__ fails with:
> SemanticException Unexpected unknown partitions
> Reproduce:
> {noformat}
> SET hive.exec.dynamic.partition=true;
> SET hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=1;
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (col1 string) PARTITIONED BY (p1 int) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;
> INSERT OVERWRITE TABLE test PARTITION (p1) SELECT code, IF(salary > 600, 100, 
> null) as p1 FROM jsmall;
> hive> SHOW PARTITIONS test;
> OK
> p1=100
> p1=__HIVE_DEFAULT_PARTITION__
> Time taken: 0.124 seconds, Fetched: 2 row(s)
> hive> ALTER TABLE test DROP partition (p1 = '__HIVE_DEFAULT_PARTITION__');
> FAILED: SemanticException Unexpected unknown partitions for (p1 = null)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11208) Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a "string" type

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-11208:

Status: In Progress  (was: Patch Available)

> Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a 
> "string" type
> 
>
> Key: HIVE-11208
> URL: https://issues.apache.org/jira/browse/HIVE-11208
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 1.1.0
>Reporter: Yongzhi Chen
>Assignee: Aihua Xu
>
> When the partition is not a string type (for example, an int type), dropping 
> the default partition __HIVE_DEFAULT_PARTITION__ fails with:
> SemanticException Unexpected unknown partitions
> Reproduce:
> {noformat}
> SET hive.exec.dynamic.partition=true;
> SET hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=1;
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (col1 string) PARTITIONED BY (p1 int) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;
> INSERT OVERWRITE TABLE test PARTITION (p1) SELECT code, IF(salary > 600, 100, 
> null) as p1 FROM jsmall;
> hive> SHOW PARTITIONS test;
> OK
> p1=100
> p1=__HIVE_DEFAULT_PARTITION__
> Time taken: 0.124 seconds, Fetched: 2 row(s)
> hive> ALTER TABLE test DROP partition (p1 = '__HIVE_DEFAULT_PARTITION__');
> FAILED: SemanticException Unexpected unknown partitions for (p1 = null)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-11208) Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a "string" type

2016-10-21 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-11208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-11208:

Attachment: (was: HIVE-11208.1.patch)

> Can not drop a default partition __HIVE_DEFAULT_PARTITION__ which is not a 
> "string" type
> 
>
> Key: HIVE-11208
> URL: https://issues.apache.org/jira/browse/HIVE-11208
> Project: Hive
>  Issue Type: Bug
>  Components: Parser
>Affects Versions: 1.1.0
>Reporter: Yongzhi Chen
>Assignee: Aihua Xu
> Attachments: HIVE-11208.1.patch
>
>
> When the partition is not a string type (for example, an int type), dropping 
> the default partition __HIVE_DEFAULT_PARTITION__ fails with:
> SemanticException Unexpected unknown partitions
> Reproduce:
> {noformat}
> SET hive.exec.dynamic.partition=true;
> SET hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=1;
> DROP TABLE IF EXISTS test;
> CREATE TABLE test (col1 string) PARTITIONED BY (p1 int) ROW FORMAT DELIMITED 
> FIELDS TERMINATED BY '\001' STORED AS TEXTFILE;
> INSERT OVERWRITE TABLE test PARTITION (p1) SELECT code, IF(salary > 600, 100, 
> null) as p1 FROM jsmall;
> hive> SHOW PARTITIONS test;
> OK
> p1=100
> p1=__HIVE_DEFAULT_PARTITION__
> Time taken: 0.124 seconds, Fetched: 2 row(s)
> hive> ALTER TABLE test DROP partition (p1 = '__HIVE_DEFAULT_PARTITION__');
> FAILED: SemanticException Unexpected unknown partitions for (p1 = null)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14984) Hive-WebUI access results in Request is a replay (34) attack

2016-10-21 Thread Barna Zsombor Klara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barna Zsombor Klara updated HIVE-14984:
---
Attachment: HIVE-14984.patch

Reuploading as it seems the precommit tests didn't run.

> Hive-WebUI access results in Request is a replay (34) attack
> 
>
> Key: HIVE-14984
> URL: https://issues.apache.org/jira/browse/HIVE-14984
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.0
>Reporter: Venkat Sambath
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-14984.patch
>
>
> When trying to access the kerberized WebUI of HS2, the following error is received:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: Request 
> is a replay (34))
> This does not happen for the RM WebUI (verified with the Kerberos WebUI 
> enabled).
> To reproduce the issue, try running
> curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://:10002/
> from any cluster node,
> or 
> try accessing the URL from a Windows VM with the Firefox browser.
> The following workaround helped, but a permanent solution for the bug is needed.
> Workaround:
> =
> First access index.html directly and then the actual URL of the WebUI
> curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://:10002/index.html
> curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://:10002
> In browser:
> First access
> http://:10002/index.html
> then
> http://:10002



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-14984) Hive-WebUI access results in Request is a replay (34) attack

2016-10-21 Thread Barna Zsombor Klara (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-14984?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Barna Zsombor Klara updated HIVE-14984:
---
Attachment: (was: HIVE-14984.patch)

> Hive-WebUI access results in Request is a replay (34) attack
> 
>
> Key: HIVE-14984
> URL: https://issues.apache.org/jira/browse/HIVE-14984
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.0
>Reporter: Venkat Sambath
>Assignee: Barna Zsombor Klara
> Attachments: HIVE-14984.patch
>
>
> When trying to access the kerberized WebUI of HS2, the following error is received:
> GSSException: Failure unspecified at GSS-API level (Mechanism level: Request 
> is a replay (34))
> This does not happen for the RM WebUI (verified with the Kerberos WebUI 
> enabled).
> To reproduce the issue, try running
> curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://:10002/
> from any cluster node,
> or 
> try accessing the URL from a Windows VM with the Firefox browser.
> The following workaround helped, but a permanent solution for the bug is needed.
> Workaround:
> =
> First access index.html directly and then the actual URL of the WebUI
> curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://:10002/index.html
> curl --negotiate -u : -b ~/cookiejar.txt -c ~/cookiejar.txt 
> http://:10002
> In browser:
> First access
> http://:10002/index.html
> then
> http://:10002



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HIVE-14913) Add new unit tests

2016-10-21 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15595151#comment-15595151
 ] 

Ashutosh Chauhan commented on HIVE-14913:
-

Pushed to master.

> Add new unit tests
> --
>
> Key: HIVE-14913
> URL: https://issues.apache.org/jira/browse/HIVE-14913
> Project: Hive
>  Issue Type: Task
>  Components: Tests
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Fix For: 2.2.0
>
> Attachments: HIVE-14913.1.patch, HIVE-14913.2.patch, 
> HIVE-14913.3.patch, HIVE-14913.4.patch, HIVE-14913.5.patch, 
> HIVE-14913.6.patch, HIVE-14913.7.patch, HIVE-14913.8.patch, HIVE-14913.9.patch
>
>
> Moving a bunch of tests from system tests to Hive unit tests to reduce testing 
> overhead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

