[jira] [Commented] (HIVE-17544) Provide classname info for function authorization

2017-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187657#comment-16187657
 ] 

Hive QA commented on HIVE-17544:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12889903/HIVE-17544.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 11 failed/errored test(s), 11193 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=232)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=232)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_2] 
(batchId=101)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[repl_load_requires_admin]
 (batchId=91)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth.testMetaApiAllowed 
(batchId=230)
org.apache.hive.jdbc.authorization.TestJdbcMetadataApiAuth.testMetaApiDisAllowed
 (batchId=230)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7069/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7069/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7069/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 11 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12889903 - PreCommit-HIVE-Build

> Provide classname info for function authorization
> -
>
> Key: HIVE-17544
> URL: https://issues.apache.org/jira/browse/HIVE-17544
> Project: Hive
>  Issue Type: Task
>  Components: Authorization
>Affects Versions: 2.1.1
>Reporter: Na Li
>Assignee: Aihua Xu
>Priority: Critical
> Attachments: HIVE-17544.1.patch, HIVE-17544.2.patch
>
>
> Right now, for authorization V2, 
> HiveAuthorizationValidator.checkPrivileges(HiveOperationType var1, 
> List<HivePrivilegeObject> var2, List<HivePrivilegeObject> var3, 
> HiveAuthzContext var4) does not receive the parsed SQL command string as 
> input, so Sentry has to parse the command again.
> The API should be changed to include all required information as input, so 
> that Sentry does not need to parse the SQL command string again.
> Known situations:
> 1) When dropping a database that does not exist, Hive should either not call 
> Sentry at all or call Sentry with the database name as input.
> 2) When creating a function, Hive should provide the UDF class name as input.
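A minimal, self-contained sketch (hypothetical stand-in types, not the real Hive classes) of why carrying the UDF class name on the privilege objects passed to checkPrivileges would let a plugin such as Sentry authorize CREATE FUNCTION without re-parsing the SQL text; the actual HiveAuthorizationValidator/HivePrivilegeObject signatures may differ in detail.

{code:java}
import java.util.List;

// Hypothetical stand-in for HivePrivilegeObject, extended with the class name this issue asks for.
class FunctionPrivilegeObject {
    final String functionName;
    final String className;   // e.g. "com.example.udf.MyUpper"; today this must be re-parsed from SQL

    FunctionPrivilegeObject(String functionName, String className) {
        this.functionName = functionName;
        this.className = className;
    }
}

// Same shape as checkPrivileges(...), with simplified types for the sketch.
interface ValidatorSketch {
    void checkPrivileges(String operationType,
                         List<FunctionPrivilegeObject> inputs,
                         List<FunctionPrivilegeObject> outputs);
}

class SentryLikeValidator implements ValidatorSketch {
    @Override
    public void checkPrivileges(String operationType,
                                List<FunctionPrivilegeObject> inputs,
                                List<FunctionPrivilegeObject> outputs) {
        if ("CREATEFUNCTION".equals(operationType)) {
            for (FunctionPrivilegeObject o : outputs) {
                // With the class name supplied up front, no second parse of the command is needed.
                System.out.println("authorize CREATE FUNCTION " + o.functionName
                        + " backed by class " + o.className);
            }
        }
    }
}
{code}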



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17544) Provide classname info for function authorization

2017-10-01 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-17544:

Attachment: HIVE-17544.2.patch

> Provide classname info for function authorization
> -
>
> Key: HIVE-17544
> URL: https://issues.apache.org/jira/browse/HIVE-17544
> Project: Hive
>  Issue Type: Task
>  Components: Authorization
>Affects Versions: 2.1.1
>Reporter: Na Li
>Assignee: Aihua Xu
>Priority: Critical
> Attachments: HIVE-17544.1.patch, HIVE-17544.2.patch
>
>
> Right now, for authorization V2, 
> HiveAuthorizationValidator.checkPrivileges(HiveOperationType var1, 
> List<HivePrivilegeObject> var2, List<HivePrivilegeObject> var3, 
> HiveAuthzContext var4) does not receive the parsed SQL command string as 
> input, so Sentry has to parse the command again.
> The API should be changed to include all required information as input, so 
> that Sentry does not need to parse the SQL command string again.
> Known situations:
> 1) When dropping a database that does not exist, Hive should either not call 
> Sentry at all or call Sentry with the database name as input.
> 2) When creating a function, Hive should provide the UDF class name as input.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16395) ConcurrentModificationException on config object in HoS

2017-10-01 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187634#comment-16187634
 ] 

Sahil Takiar commented on HIVE-16395:
-

3.5 ms isn't very long. I say we set it to true by default in HoS. We'll have 
to add code that enables it by default, but if any Hive configuration file 
({{hive-site.xml}}) explicitly sets it to {{false}} we'll have to honor that 
setting and disable it.
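A minimal sketch of the override pattern described above, assuming a placeholder property name (the real key is intentionally not named here): enable the flag by default for HoS unless a configuration file such as {{hive-site.xml}} has already set it explicitly.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class HosDefaultOverride {
    // Hypothetical key used only for illustration.
    static final String FLAG = "some.flag.under.discussion";

    static void applyHosDefault(Configuration conf) {
        // get(FLAG) returns null only when no config file or caller has set the property,
        // so an explicit "false" in hive-site.xml is honored and left untouched.
        if (conf.get(FLAG) == null) {
            conf.setBoolean(FLAG, true);
        }
    }

    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource("hive-site.xml");   // picks up user overrides when present on the classpath
        applyHosDefault(conf);
        System.out.println(FLAG + " = " + conf.getBoolean(FLAG, false));
    }
}
{code}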

> ConcurrentModificationException on config object in HoS
> ---
>
> Key: HIVE-16395
> URL: https://issues.apache.org/jira/browse/HIVE-16395
> Project: Hive
>  Issue Type: Task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>
> Looks like this is happening inside Spark executors; it appears to be a race 
> condition when modifying {{Configuration}} objects.
> Stack-Trace:
> {code}
> java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
>   at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:267)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.(HadoopShimsSecure.java:213)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:334)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:682)
>   at org.apache.spark.rdd.HadoopRDD$$anon$1.(HadoopRDD.scala:240)
>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:211)
>   at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:87)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>   at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>   at org.apache.spark.scheduler.Task.run(Task.scala:89)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:242)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:253)
>   ... 21 more
> Caused by: java.util.ConcurrentModificationException
>   at java.util.Hashtable$Enumerator.next(Hashtable.java:1167)
>   at 
> org.apache.hadoop.conf.Configuration.iterator(Configuration.java:2455)
>   at 
> org.apache.hadoop.fs.s3a.S3AUtils.propagateBucketOptions(S3AUtils.java:716)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:181)
>   at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2815)
>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
>   at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2852)
>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2834)
>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
>   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>   at 
> org.apache.hadoop.mapred.LineRecordReader.(LineRecordReader.java:108)
>   at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>   at 
> org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.(CombineHiveRecordReader.java:68)
>   ... 26 more
> {code}
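For context, a hedged standalone sketch (not Hive code) of how sharing one {{Configuration}} between threads can surface this exception: {{Configuration.iterator()}} enumerates the backing properties while another thread keeps mutating them, which is the Hashtable$Enumerator failure in the trace above. Whether it actually trips depends on timing.

{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;

public class ConfRace {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(false);
        for (int i = 0; i < 10_000; i++) {
            conf.set("key." + i, "v");
        }
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1_000_000; i++) {
                conf.set("extra." + i, "v");              // concurrent modification
            }
        });
        writer.start();
        try {
            for (Map.Entry<String, String> e : conf) {    // Configuration implements Iterable
                e.getKey();
            }
        } catch (java.util.ConcurrentModificationException cme) {
            System.out.println("reproduced: " + cme);     // what the Spark executors hit
        }
        writer.join();
    }
}
{code}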



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HIVE-17602) Explain plan not working

2017-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187600#comment-16187600
 ] 

Hive QA commented on HIVE-17602:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12889895/HIVE-17602.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 11186 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=232)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=232)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=171)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7068/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7068/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7068/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12889895 - PreCommit-HIVE-Build

> Explain plan not working
> 
>
> Key: HIVE-17602
> URL: https://issues.apache.org/jira/browse/HIVE-17602
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17602.1.patch, HIVE-17602.2.patch
>
>
> {code:sql}
> hive> CREATE TABLE src (key STRING COMMENT 'default', value STRING COMMENT 
> 'default') STORED AS TEXTFILE;
> hive> explain select * from src where key > '4';
> Failed with exception wrong number of arguments
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.ExplainTask
> {code}
> Error stack in hive.log
> {noformat}
> 2017-09-25T21:18:59,591 ERROR [726b5e51-f470-4a79-be8c-95b82a6aa85d main] 
> exec.Task: Failed with exception wrong number of arguments
> java.lang.IllegalArgumentException: wrong number of arguments
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:896)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:774)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:797)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputList(ExplainTask.java:635)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:968)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputMap(ExplainTask.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:954)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:1052)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputStagePlans(ExplainTask.java:1197)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:275)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:220)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.execute(ExplainTask.java:368)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:204)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2190)
>   at 

[jira] [Commented] (HIVE-16511) CBO looses inner casts on constants of complex type

2017-10-01 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187575#comment-16187575
 ] 

Hive QA commented on HIVE-16511:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12889894/HIVE-16511.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 11187 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=232)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=232)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constprog_when_case] 
(batchId=56)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_sort_1_23] 
(batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_coalesce] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_coalesce_3] 
(batchId=55)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[optimize_nullscan]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_fast_stats]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_coalesce_3]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets_grouping]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_ptf_part_simple]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_explainuser_1]
 (batchId=171)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query36] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query70] 
(batchId=242)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query86] 
(batchId=242)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query14] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query36] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query70] 
(batchId=240)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query86] 
(batchId=240)
org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut 
(batchId=203)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/7067/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/7067/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-7067/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 21 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12889894 - PreCommit-HIVE-Build

> CBO looses inner casts on constants of complex type
> ---
>
> Key: HIVE-16511
> URL: https://issues.apache.org/jira/browse/HIVE-16511
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
> Attachments: HIVE-16511.1.patch
>
>
> type for map <10, cast(null as int)> becomes map 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17602) Explain plan not working

2017-10-01 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17602:
---
Status: Open  (was: Patch Available)

> Explain plan not working
> 
>
> Key: HIVE-17602
> URL: https://issues.apache.org/jira/browse/HIVE-17602
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17602.1.patch, HIVE-17602.2.patch
>
>
> {code:sql}
> hive> CREATE TABLE src (key STRING COMMENT 'default', value STRING COMMENT 
> 'default') STORED AS TEXTFILE;
> hive> explain select * from src where key > '4';
> Failed with exception wrong number of arguments
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.ExplainTask
> {code}
> Error stack in hive.log
> {noformat}
> 2017-09-25T21:18:59,591 ERROR [726b5e51-f470-4a79-be8c-95b82a6aa85d main] 
> exec.Task: Failed with exception wrong number of arguments
> java.lang.IllegalArgumentException: wrong number of arguments
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:896)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:774)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:797)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputList(ExplainTask.java:635)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:968)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputMap(ExplainTask.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:954)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:1052)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputStagePlans(ExplainTask.java:1197)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:275)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:220)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.execute(ExplainTask.java:368)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:204)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2190)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1832)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1549)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1304)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1294)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {noformat}
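As background on the error text, a small standalone example (not the Hive fix) showing that "wrong number of arguments" is the IllegalArgumentException message reflection produces when {{Method.invoke}} receives an argument list whose length does not match the target method; this is how ExplainTask's reflective walk over plan getters can fail.

{code:java}
import java.lang.reflect.Method;

public class WrongArgCountDemo {
    public String description(String prefix) {            // one-parameter method
        return prefix + ": demo";
    }

    public static void main(String[] args) throws Exception {
        Method m = WrongArgCountDemo.class.getMethod("description", String.class);
        try {
            m.invoke(new WrongArgCountDemo());             // invoked with zero arguments
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());            // prints "wrong number of arguments"
        }
    }
}
{code}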



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17602) Explain plan not working

2017-10-01 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17602:
---
Attachment: HIVE-17602.2.patch

> Explain plan not working
> 
>
> Key: HIVE-17602
> URL: https://issues.apache.org/jira/browse/HIVE-17602
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17602.1.patch, HIVE-17602.2.patch
>
>
> {code:sql}
> hive> CREATE TABLE src (key STRING COMMENT 'default', value STRING COMMENT 
> 'default') STORED AS TEXTFILE;
> hive> explain select * from src where key > '4';
> Failed with exception wrong number of arguments
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.ExplainTask
> {code}
> Error stack in hive.log
> {noformat}
> 2017-09-25T21:18:59,591 ERROR [726b5e51-f470-4a79-be8c-95b82a6aa85d main] 
> exec.Task: Failed with exception wrong number of arguments
> java.lang.IllegalArgumentException: wrong number of arguments
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:896)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:774)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:797)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputList(ExplainTask.java:635)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:968)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputMap(ExplainTask.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:954)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:1052)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputStagePlans(ExplainTask.java:1197)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:275)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:220)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.execute(ExplainTask.java:368)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:204)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2190)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1832)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1549)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1304)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1294)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17602) Explain plan not working

2017-10-01 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-17602:
---
Status: Patch Available  (was: Open)

> Explain plan not working
> 
>
> Key: HIVE-17602
> URL: https://issues.apache.org/jira/browse/HIVE-17602
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Affects Versions: 3.0.0
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-17602.1.patch, HIVE-17602.2.patch
>
>
> {code:sql}
> hive> CREATE TABLE src (key STRING COMMENT 'default', value STRING COMMENT 
> 'default') STORED AS TEXTFILE;
> hive> explain select * from src where key > '4';
> Failed with exception wrong number of arguments
> FAILED: Execution Error, return code 1 from 
> org.apache.hadoop.hive.ql.exec.ExplainTask
> {code}
> Error stack in hive.log
> {noformat}
> 2017-09-25T21:18:59,591 ERROR [726b5e51-f470-4a79-be8c-95b82a6aa85d main] 
> exec.Task: Failed with exception wrong number of arguments
> java.lang.IllegalArgumentException: wrong number of arguments
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:896)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:774)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:797)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputList(ExplainTask.java:635)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:968)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputMap(ExplainTask.java:569)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:954)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:668)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputPlan(ExplainTask.java:1052)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.outputStagePlans(ExplainTask.java:1197)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:275)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.getJSONPlan(ExplainTask.java:220)
>   at 
> org.apache.hadoop.hive.ql.exec.ExplainTask.execute(ExplainTask.java:368)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:204)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2190)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1832)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1549)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1304)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1294)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:187)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:409)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:827)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:765)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:692)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:234)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16511) CBO looses inner casts on constants of complex type

2017-10-01 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16511:
---
Status: Patch Available  (was: Open)

> CBO looses inner casts on constants of complex type
> ---
>
> Key: HIVE-16511
> URL: https://issues.apache.org/jira/browse/HIVE-16511
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
> Attachments: HIVE-16511.1.patch
>
>
> type for map <10, cast(null as int)> becomes map 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-16511) CBO looses inner casts on constants of complex type

2017-10-01 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-16511:
---
Attachment: HIVE-16511.1.patch

> CBO looses inner casts on constants of complex type
> ---
>
> Key: HIVE-16511
> URL: https://issues.apache.org/jira/browse/HIVE-16511
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
> Attachments: HIVE-16511.1.patch
>
>
> type for map <10, cast(null as int)> becomes map 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-16511) CBO looses inner casts on constants of complex type

2017-10-01 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg reassigned HIVE-16511:
--

Assignee: Vineet Garg

> CBO looses inner casts on constants of complex type
> ---
>
> Key: HIVE-16511
> URL: https://issues.apache.org/jira/browse/HIVE-16511
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Reporter: Ashutosh Chauhan
>Assignee: Vineet Garg
> Attachments: HIVE-16511.1.patch
>
>
> type for map <10, cast(null as int)> becomes map 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17550) Remove unreferenced q.out-s

2017-10-01 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187544#comment-16187544
 ] 

Vineet Garg commented on HIVE-17550:


[~kgyrtkirk] I just realized your patch also deleted q.out files for disabled 
tests. We should not be deleting those, since those tests are only temporarily 
disabled and when they are re-enabled it is useful to compare the output for 
changes, e.g. {{min_structvalue.q}}.

> Remove unreferenced q.out-s
> ---
>
> Key: HIVE-17550
> URL: https://issues.apache.org/jira/browse/HIVE-17550
> Project: Hive
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
> Fix For: 3.0.0
>
> Attachments: HIVE-17550.01.patch
>
>
> It's sometimes a bit misleading to see q.out-s that are never even used.
> I'll also add a small utility that can remove them, and add a test 
> that will help avoid them in the future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16898) Validation of source file after distcp in repl load

2017-10-01 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187523#comment-16187523
 ] 

Sankar Hariappan commented on HIVE-16898:
-

[~thejas]
My commit setup still has problems, so I couldn't commit it myself.
Can you please commit this patch?

> Validation of source file after distcp in repl load 
> 
>
> Key: HIVE-16898
> URL: https://issues.apache.org/jira/browse/HIVE-16898
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-16898.1.patch, HIVE-16898.2.patch, 
> HIVE-16898.3.patch, HIVE-16898.4.patch, HIVE-16898.5.patch, 
> HIVE-16898.6.patch, HIVE-16898.7.patch, HIVE-16898.8.patch
>
>
> The source file can change between the time the source and destination paths 
> for distcp are decided and the time distcp is invoked, so distcp might copy the 
> wrong file to the destination. We should therefore add an additional check on the 
> checksum of the source file after distcp finishes, to make sure the file did not 
> change during the copy. If it has changed, delete the previously copied file on 
> the destination, copy the new source, and repeat the same process until the 
> correct file is copied.
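A hedged sketch of the validation loop described above (not the actual Hive copy task): record the source checksum, run the copy, and if the source changed while the copy was in flight, delete the stale destination and retry. The three-attempt limit and the {{CopyFn}} stand-in for the distcp invocation are assumptions for illustration.

{code:java}
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ValidatedCopy {
    interface CopyFn { void copy(Path src, Path dst) throws Exception; }

    static void copyWithValidation(FileSystem srcFs, Path src,
                                   FileSystem dstFs, Path dst,
                                   CopyFn copyFn) throws Exception {
        for (int attempt = 0; attempt < 3; attempt++) {
            FileChecksum before = srcFs.getFileChecksum(src);
            copyFn.copy(src, dst);                         // e.g. a distcp invocation
            FileChecksum after = srcFs.getFileChecksum(src);
            if (before != null && before.equals(after)) {
                return;                                    // source was stable during the copy
            }
            dstFs.delete(dst, false);                      // stale copy; retry with the new contents
        }
        throw new IllegalStateException("source kept changing during copy: " + src);
    }
}
{code}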



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17606) Improve security for DB notification related APIs

2017-10-01 Thread Tao Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187465#comment-16187465
 ] 

Tao Li commented on HIVE-17606:
---

Added a section here: 
https://cwiki.apache.org/confluence/display/Hive/HiveReplicationv2Development#HiveReplicationv2Development-MetastorenotificationAPIsecurity

[~leftylev] Please let me know if you have any suggestions to improve it. 
Thanks!

> Improve security for DB notification related APIs
> -
>
> Key: HIVE-17606
> URL: https://issues.apache.org/jira/browse/HIVE-17606
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Tao Li
>Assignee: Tao Li
> Fix For: 3.0.0
>
> Attachments: HIVE-17606.10.patch, HIVE-17606.1.patch, 
> HIVE-17606.2.patch, HIVE-17606.3.patch, HIVE-17606.4.patch, 
> HIVE-17606.5.patch, HIVE-17606.6.patch, HIVE-17606.7.patch, 
> HIVE-17606.8.patch, HIVE-17606.9.patch
>
>
> The purpose is to make sure only the superusers that are specified in the 
> proxyuser settings can make the DB notification related API calls, since these 
> are supposed to be called by a superuser/admin rather than by arbitrary end users.
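A hedged sketch of the kind of proxy-user check described above (the actual patch may differ in where and how it hooks in): only callers configured as Hadoop proxy superusers for the caller's host are allowed through, relying on the standard {{ProxyUsers}} machinery.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.authorize.AuthorizationException;
import org.apache.hadoop.security.authorize.ProxyUsers;

public class NotificationApiGuard {
    static void checkIsProxySuperUser(Configuration conf, String callerIp) throws Exception {
        ProxyUsers.refreshSuperUserGroupsConfiguration(conf);
        UserGroupInformation caller = UserGroupInformation.getCurrentUser();
        try {
            // authorize() throws unless 'caller' may impersonate other users from this host,
            // i.e. unless hadoop.proxyuser.<caller>.hosts/.groups marks it as a superuser.
            ProxyUsers.authorize(UserGroupInformation.createProxyUser("any-user", caller), callerIp);
        } catch (AuthorizationException e) {
            throw new SecurityException("only configured proxy superusers may call this API", e);
        }
    }
}
{code}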



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-13843) Re-enable the HoS tests disabled in HIVE-13402

2017-10-01 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187310#comment-16187310
 ] 

Rui Li commented on HIVE-13843:
---

[~stakiar], sorry about the delay. The patch LGTM, +1.
I think the diff of {{ppd_join4.q}} is due to HIVE-6348 :)

> Re-enable the HoS tests disabled in HIVE-13402
> --
>
> Key: HIVE-13843
> URL: https://issues.apache.org/jira/browse/HIVE-13843
> Project: Hive
>  Issue Type: Test
>Reporter: Rui Li
>Assignee: Sahil Takiar
> Attachments: HIVE-13843.1.patch
>
>
> With HIVE-13525, we can now fix and re-enable the tests for Spark.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16898) Validation of source file after distcp in repl load

2017-10-01 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187302#comment-16187302
 ] 

Thejas M Nair commented on HIVE-16898:
--

+1
Please go ahead and commit. If you have any setup issues with commit, let me 
know, I can do it.

> Validation of source file after distcp in repl load 
> 
>
> Key: HIVE-16898
> URL: https://issues.apache.org/jira/browse/HIVE-16898
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: Sankar Hariappan
>  Labels: pull-request-available
> Fix For: 3.0.0
>
> Attachments: HIVE-16898.1.patch, HIVE-16898.2.patch, 
> HIVE-16898.3.patch, HIVE-16898.4.patch, HIVE-16898.5.patch, 
> HIVE-16898.6.patch, HIVE-16898.7.patch, HIVE-16898.8.patch
>
>
> The source file can change between the time the source and destination paths 
> for distcp are decided and the time distcp is invoked, so distcp might copy the 
> wrong file to the destination. We should therefore add an additional check on the 
> checksum of the source file after distcp finishes, to make sure the file did not 
> change during the copy. If it has changed, delete the previously copied file on 
> the destination, copy the new source, and repeat the same process until the 
> correct file is copied.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-16895) Multi-threaded execution of bootstrap dump of partitions

2017-10-01 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187297#comment-16187297
 ] 

Lefty Leverenz commented on HIVE-16895:
---

Doc update:  HIVE-17625 changes the default value to 100, also in release 3.0.0.

The wiki has been updated:

* [hive.repl.partitions.dump.parallelism | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.repl.partitions.dump.parallelism]

>  Multi-threaded execution of bootstrap dump of partitions
> -
>
> Key: HIVE-16895
> URL: https://issues.apache.org/jira/browse/HIVE-16895
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
> Attachments: HIVE-16895.1.patch, HIVE-16895.2.patch
>
>
> To allow faster execution of the bootstrap dump phase, we dump multiple partitions 
> of the same table simultaneously (see the sketch after this list). 
> Even though dumping functions is not going to be a blocker, moving to 
> similar execution modes for all metastore objects will make the code more 
> coherent. 
> Bootstrap dump at the db level does:
> * bootstrap of all tables
> ** bootstrap of all partitions in a table (scope of the current jira)
> * bootstrap of all functions
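A minimal sketch of the parallel partition dump described above (not the Hive implementation): partitions of one table are dumped concurrently, with the pool size driven by a parallelism setting such as hive.repl.partitions.dump.parallelism.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionDumpSketch {
    static void dumpPartitions(List<String> partitions, int parallelism) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(parallelism);
        try {
            List<Future<?>> pending = new ArrayList<>();
            for (String partition : partitions) {
                pending.add(pool.submit(() -> dumpOne(partition)));
            }
            for (Future<?> f : pending) {
                f.get();                                   // surface any per-partition failure
            }
        } finally {
            pool.shutdown();
        }
    }

    static void dumpOne(String partition) {
        System.out.println("dumping " + partition);        // placeholder for the real dump work
    }
}
{code}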



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17625) Replication: update hive.repl.partitions.dump.parallelism to 100

2017-10-01 Thread Lefty Leverenz (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16187295#comment-16187295
 ] 

Lefty Leverenz commented on HIVE-17625:
---

Doc note:  The wiki has been updated for the new default value of 
*hive.repl.partitions.dump.parallelism*, which was added by HIVE-16895 in the 
same release (3.0.0).

* [Configuration Properties -- hive.repl.partitions.dump.parallelism | 
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.repl.partitions.dump.parallelism]



> Replication: update hive.repl.partitions.dump.parallelism to 100
> 
>
> Key: HIVE-17625
> URL: https://issues.apache.org/jira/browse/HIVE-17625
> Project: Hive
>  Issue Type: Bug
>  Components: repl
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Fix For: 3.0.0
>
> Attachments: HIVE-17625.1.patch
>
>
> Set hive.repl.partitions.dump.parallelism=100



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)