[jira] [Commented] (HIVE-19220) perfLogger = SessionState.getPerfLogger() in Driver.java need to combine

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440408#comment-16440408
 ] 

Hive QA commented on HIVE-19220:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919174/HIVE-19220.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 109 failed/errored test(s), 13840 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=96)


[jira] [Updated] (HIVE-19217) Upgrade to Hadoop 3.1.0

2018-04-16 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-19217:

Attachment: HIVE-19217.2.patch

> Upgrade to Hadoop 3.1.0
> ---
>
> Key: HIVE-19217
> URL: https://issues.apache.org/jira/browse/HIVE-19217
> Project: Hive
>  Issue Type: Bug
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19217.1.patch, HIVE-19217.2.patch
>
>
> Upgrade to Hadoop 3.1.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-04-16 Thread zhuwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuwei updated HIVE-18871:
--
Status: Patch Available  (was: Open)

Upload a new patch file targeted to master branch.

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, 
> HIVE-18871.3.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query fails with the error 
> log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>  ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) 
> ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) 
> ~[hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at 
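For illustration, a minimal standalone sketch of the filesystem mismatch described above, using only the public Hadoop FileSystem API (this is not the actual patch, and the hdfs://mycluster nameservice is assumed to be configured on the client):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AuxJarFsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path auxJar = new Path("hdfs://mycluster/apps/hive/lib/guava.jar");

    // What the stack trace shows: the hdfs:// path is handed to the local
    // filesystem, whose checkPath() rejects it with
    // "Wrong FS: hdfs://..., expected: file:///".
    FileSystem localFs = FileSystem.getLocal(conf);
    try {
      localFs.getFileStatus(auxJar);
    } catch (IllegalArgumentException e) {
      System.out.println("Reproduced: " + e.getMessage());
    }

    // Resolving the filesystem from the path's own scheme avoids the mismatch
    // (assumes the hdfs://mycluster nameservice is reachable from this client).
    FileSystem srcFs = auxJar.getFileSystem(conf);
    System.out.println("Resolved filesystem: " + srcFs.getUri());
  }
}
{code}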

[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-04-16 Thread zhuwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuwei updated HIVE-18871:
--
Attachment: HIVE-18871.3.patch

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, 
> HIVE-18871.3.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query fails with the error 
> log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>  ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) 
> ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) 
> ~[hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at 

[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-04-16 Thread zhuwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuwei updated HIVE-18871:
--
Status: Open  (was: Patch Available)

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Fix For: 2.2.1
>
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, 
> HIVE-18871.3.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query fails with the error 
> log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>  ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) 
> ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) 
> ~[hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 

[jira] [Updated] (HIVE-18871) hive on tez execution error due to set hive.aux.jars.path to hdfs://

2018-04-16 Thread zhuwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuwei updated HIVE-18871:
--
Fix Version/s: (was: 2.2.1)

> hive on tez execution error due to set hive.aux.jars.path to hdfs://
> 
>
> Key: HIVE-18871
> URL: https://issues.apache.org/jira/browse/HIVE-18871
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.1
> Environment: hadoop 2.6.5
> hive 2.2.1
> tez 0.8.4
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Major
> Attachments: HIVE-18871.1.patch, HIVE-18871.2.patch, 
> HIVE-18871.3.patch
>
>
> When the properties 
> hive.aux.jars.path=hdfs://mycluster/apps/hive/lib/guava.jar
> and hive.execution.engine=tez are set, executing any query fails with the error 
> log below:
> exec.Task: Failed to execute tez graph.
> java.lang.IllegalArgumentException: Wrong FS: 
> hdfs://mycluster/apps/hive/lib/guava.jar, expected: file:///
>  at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
>  ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
>  ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337) 
> ~[hadoop-common-2.6.0.jar:?]
>  at org.apache.hadoop.fs.FileSystem.copyFromLocalFile(FileSystem.java:1905) 
> ~[hadoop-common-2.6.0.jar:?]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeResource(DagUtils.java:1007)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.addTempResources(DagUtils.java:902)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.DagUtils.localizeTempFilesFromConf(DagUtils.java:845)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.refreshLocalResourcesFromConf(TezSessionState.java:466)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:252)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager$TezSessionPoolSession.openInternal(TezSessionPoolManager.java:622)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:206)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:283) 
> ~[hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:155) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2073) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1744) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1453) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1171) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1161) 
> [hive-exec-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:232) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:183) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:429) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:445) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:151) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:399) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:776) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:714) 
> [hive-cli-2.1.1.jar:2.1.1]
>  at 

[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing

2018-04-16 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18910:
--
Attachment: HIVE-18910.32.patch

> Migrate to Murmur hash for shuffle and bucketing
> 
>
> Key: HIVE-18910
> URL: https://issues.apache.org/jira/browse/HIVE-18910
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, 
> HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, 
> HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, 
> HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch, 
> HIVE-18910.2.patch, HIVE-18910.20.patch, HIVE-18910.21.patch, 
> HIVE-18910.22.patch, HIVE-18910.23.patch, HIVE-18910.24.patch, 
> HIVE-18910.25.patch, HIVE-18910.26.patch, HIVE-18910.27.patch, 
> HIVE-18910.28.patch, HIVE-18910.29.patch, HIVE-18910.3.patch, 
> HIVE-18910.30.patch, HIVE-18910.31.patch, HIVE-18910.32.patch, 
> HIVE-18910.4.patch, HIVE-18910.5.patch, HIVE-18910.6.patch, 
> HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch
>
>
> Hive uses the Java hash, which does not give as good a distribution or 
> efficiency as Murmur hash when bucketing a table.
> Migrate to Murmur hash, but keep backward compatibility for existing 
> users so that they don't have to reload their existing tables.
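For illustration, a minimal sketch of how the hash choice affects bucket assignment; Guava's Murmur3 stands in here for whatever Murmur implementation the patch uses, and (hash & Integer.MAX_VALUE) % numBuckets is the usual bucketing convention:

{code:java}
import com.google.common.hash.Hashing;

public class BucketHashDemo {
  // Bucket assignment: non-negative hash modulo bucket count.
  static int bucket(int hash, int numBuckets) {
    return (hash & Integer.MAX_VALUE) % numBuckets;
  }

  public static void main(String[] args) {
    int numBuckets = 16;
    // Keys that are multiples of the bucket count all collide under the
    // identity-like Java Long hash, while Murmur3 spreads them out.
    for (long key : new long[] {0L, 16L, 32L, 48L}) {
      int javaBucket = bucket(Long.hashCode(key), numBuckets);
      int murmurBucket = bucket(Hashing.murmur3_32().hashLong(key).asInt(), numBuckets);
      System.out.printf("key=%d  javaBucket=%d  murmurBucket=%d%n",
          key, javaBucket, murmurBucket);
    }
  }
}
{code}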



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19220) perfLogger = SessionState.getPerfLogger() in Driver.java need to combine

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440374#comment-16440374
 ] 

Hive QA commented on HIVE-19220:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10263/dev-support/hive-personality.sh
 |
| git revision | master / 28f7d19 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10263/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10263/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> perfLogger = SessionState.getPerfLogger() in Driver.java need to combine
> 
>
> Key: HIVE-19220
> URL: https://issues.apache.org/jira/browse/HIVE-19220
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: shengsiwei
>Assignee: shengsiwei
>Priority: Minor
> Attachments: HIVE-19220.1.patch
>
>
> I found duplicate code in Driver.java. We should eliminate the duplication to 
> improve the simplicity of the code.
>  
> Before modification
> {code:java}
> PerfLogger perfLogger = null;
> if (!alreadyCompiled) {
>  // compile internal will automatically reset the perf logger
>  compileInternal(command, true);
>  // then we continue to use this perf logger
>  perfLogger = SessionState.getPerfLogger();
> } else {
>  // reuse existing perf logger.
>  perfLogger = SessionState.getPerfLogger();
>  // Since we're reusing the compiled plan, we need to update its start time 
> for current run
>  plan.setQueryStartTime(perfLogger.getStartTime(PerfLogger.DRIVER_RUN));
> }
> {code}
>  
> After modification
>  
> {code:java}
> PerfLogger perfLogger = SessionState.getPerfLogger();
>   if (!alreadyCompiled) {
> // compile internal will automatically reset the perf logger
> compileInternal(command, true);
>   } else {
> // Since we're reusing the compiled plan, we need to update its start 
> time for current run
> 
> plan.setQueryStartTime(perfLogger.getStartTime(PerfLogger.DRIVER_RUN));
>   }
> {code}
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()

2018-04-16 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19202:

   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Zhuwei!

> CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
> -
>
> Key: HIVE-19202
> URL: https://issues.apache.org/jira/browse/HIVE-19202
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.1.1
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Critical
> Fix For: 3.1.0
>
> Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch
>
>
> I ran a query with a join and group by using the settings below, and CBO failed 
> due to a NullPointerException in HiveAggregate.isBucketedInput():
> set hive.execution.engine=tez;
> set hive.cbo.costmodel.extended=true;
>  
> In class HiveRelMdDistribution, we implemented the functions below:
> public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery 
> mq)
> public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq)
>  
> But in HiveAggregate.isBucketedInput, the argument passed to distribution is 
> "this.getInput()", which is obviously not right here. The correct argument is 
> "this".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19214) High throughput ingest ORC format

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440357#comment-16440357
 ] 

Hive QA commented on HIVE-19214:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919041/HIVE-19214.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 63 failed/errored test(s), 13434 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestJdbcWithDBTokenStore - did not produce a TEST-*.xml file (likely timed out) 
(batchId=254)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Updated] (HIVE-19162) SMB : Test tez_smb_1.q stops making SMB join for a query

2018-04-16 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-19162:
--
Fix Version/s: 3.1.0

> SMB : Test tez_smb_1.q stops making SMB join for a query
> 
>
> Key: HIVE-19162
> URL: https://issues.apache.org/jira/browse/HIVE-19162
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19162.1.patch
>
>
> The test stopped producing an SMB join and instead creates a map join, likely 
> due to a change in stats.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()

2018-04-16 Thread zhuwei (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440349#comment-16440349
 ] 

zhuwei commented on HIVE-19202:
---

Hi [~ashutoshc], thanks for the comment; I am new to the open source community. I 
checked the failed tests, and they are not introduced by my change. What should I 
do next? How can I submit a review request to Hive?

> CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
> -
>
> Key: HIVE-19202
> URL: https://issues.apache.org/jira/browse/HIVE-19202
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.1.1
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Critical
> Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch
>
>
> I ran a query with a join and group by using the settings below, and CBO failed 
> due to a NullPointerException in HiveAggregate.isBucketedInput():
> set hive.execution.engine=tez;
> set hive.cbo.costmodel.extended=true;
>  
> In class HiveRelMdDistribution, we implemented the functions below:
> public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery 
> mq)
> public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq)
>  
> But in HiveAggregate.isBucketedInput, the argument passed to distribution is 
> "this.getInput()", which is obviously not right here. The correct argument is 
> "this".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18845) SHOW COMPACTIONS should show host name

2018-04-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440348#comment-16440348
 ] 

Eugene Koifman commented on HIVE-18845:
---

+1

> SHOW COMPACTIONS should show host name
> --
>
> Key: HIVE-18845
> URL: https://issues.apache.org/jira/browse/HIVE-18845
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Igor Kryvenko
>Priority: Minor
> Attachments: HIVE-18845.01.patch, HIVE-18845.02.patch
>
>
> Once the job starts, the WorkerId includes the hostname submitting the job,
> but before that there is no way to tell which of the Metastores in an HA setup 
> has picked up a given item to compact. This should make it obvious which 
> log to look at.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440336#comment-16440336
 ] 

Eugene Koifman commented on HIVE-17970:
---

I left a few nits on RB

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new data 
> into the table.  This logic makes sense for non-acid tables, but for Acid/MM 
> it should work like an INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.
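As a rough illustration of the base_n idea (directory names and digit widths are schematic, not the actual Hive implementation): an overwrite load starts a new base_n that supersedes earlier deltas and bases without deleting them in place, while a regular load just adds another delta.

{code:java}
// Hypothetical helper sketching the Acid/MM directory naming discussed above.
public class MmLoadDirDemo {
  static String targetDir(boolean overwrite, long writeId, int stmtId) {
    if (overwrite) {
      // INSERT OVERWRITE-style load: a new base_n supersedes earlier data
      // without deleting it in place, so readers can keep going.
      return String.format("base_%07d", writeId);
    }
    // Regular load: append a new delta directory.
    return String.format("delta_%07d_%07d_%04d", writeId, writeId, stmtId);
  }

  public static void main(String[] args) {
    System.out.println(targetDir(true, 5, 0));   // base_0000005
    System.out.println(targetDir(false, 6, 0));  // delta_0000006_0000006_0000
  }
}
{code}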



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440336#comment-16440336
 ] 

Eugene Koifman edited comment on HIVE-17970 at 4/17/18 3:24 AM:


I left a few nits on RB. Otherwise LGTM.


was (Author: ekoifman):
I left a few nits on RB

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new data 
> into the table.  This logic makes sense for non-acid tables, but for Acid/MM 
> it should work like an INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-12369) Native Vector GroupBy (Part 1)

2018-04-16 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-12369:

Attachment: HIVE-12369.096.patch

> Native Vector GroupBy (Part 1)
> --
>
> Key: HIVE-12369
> URL: https://issues.apache.org/jira/browse/HIVE-12369
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-12369.01.patch, HIVE-12369.02.patch, 
> HIVE-12369.05.patch, HIVE-12369.06.patch, HIVE-12369.091.patch, 
> HIVE-12369.094.patch, HIVE-12369.095.patch, HIVE-12369.096.patch
>
>
> Implement Native Vector GroupBy using fast hash table technology developed 
> for Native Vector MapJoin, etc.
> The patch is currently limited to a single Long key with a single COUNT 
> aggregation, or a single Long key with no aggregation, also known as 
> duplicate reduction.
> 3 new classes are introduced that store the count in the slot table and don't 
> allocate hash elements:
> {noformat}
>   COUNT(column)  VectorGroupByHashLongKeyCountColumnOperator  
>   COUNT(key) VectorGroupByHashLongKeyCountKeyOperator
>   COUNT(*)   VectorGroupByHashLongKeyCountStarOperator   
> {noformat}
> And the duplicate reduction operator for a single Long key:
> {noformat}
>   VectorGroupByHashLongKeyDuplicateReductionOperator
> {noformat}
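A minimal sketch of the underlying idea, storing the COUNT directly in the slot table of an open-addressed long-key hash so that no per-entry objects are allocated (illustrative only: fixed capacity, no resizing, and not the Hive implementation):

{code:java}
public class LongKeyCountTable {
  private final long[] keys;
  private final long[] counts;    // the count lives in the slot table itself
  private final boolean[] used;
  private final int mask;

  // capacity must be a power of two; this sketch never resizes.
  LongKeyCountTable(int capacity) {
    keys = new long[capacity];
    counts = new long[capacity];
    used = new boolean[capacity];
    mask = capacity - 1;
  }

  void add(long key) {
    int slot = Long.hashCode(key) & mask;
    while (used[slot] && keys[slot] != key) {
      slot = (slot + 1) & mask;   // linear probing
    }
    used[slot] = true;
    keys[slot] = key;
    counts[slot]++;
  }

  long count(long key) {
    int slot = Long.hashCode(key) & mask;
    while (used[slot]) {
      if (keys[slot] == key) {
        return counts[slot];
      }
      slot = (slot + 1) & mask;
    }
    return 0L;
  }
}
{code}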



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-12369) Native Vector GroupBy (Part 1)

2018-04-16 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-12369:

Status: Patch Available  (was: In Progress)

> Native Vector GroupBy (Part 1)
> --
>
> Key: HIVE-12369
> URL: https://issues.apache.org/jira/browse/HIVE-12369
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-12369.01.patch, HIVE-12369.02.patch, 
> HIVE-12369.05.patch, HIVE-12369.06.patch, HIVE-12369.091.patch, 
> HIVE-12369.094.patch, HIVE-12369.095.patch, HIVE-12369.096.patch
>
>
> Implement Native Vector GroupBy using fast hash table technology developed 
> for Native Vector MapJoin, etc.
> The patch is currently limited to a single Long key with a single COUNT 
> aggregation, or a single Long key with no aggregation, also known as 
> duplicate reduction.
> 3 new classes are introduced that store the count in the slot table and don't 
> allocate hash elements:
> {noformat}
>   COUNT(column)  VectorGroupByHashLongKeyCountColumnOperator  
>   COUNT(key) VectorGroupByHashLongKeyCountKeyOperator
>   COUNT(*)   VectorGroupByHashLongKeyCountStarOperator   
> {noformat}
> And the duplicate reduction operator for a single Long key:
> {noformat}
>   VectorGroupByHashLongKeyDuplicateReductionOperator
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-12369) Native Vector GroupBy (Part 1)

2018-04-16 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-12369:

Status: In Progress  (was: Patch Available)

> Native Vector GroupBy (Part 1)
> --
>
> Key: HIVE-12369
> URL: https://issues.apache.org/jira/browse/HIVE-12369
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-12369.01.patch, HIVE-12369.02.patch, 
> HIVE-12369.05.patch, HIVE-12369.06.patch, HIVE-12369.091.patch, 
> HIVE-12369.094.patch, HIVE-12369.095.patch
>
>
> Implement Native Vector GroupBy using fast hash table technology developed 
> for Native Vector MapJoin, etc.
> The patch is currently limited to a single Long key with a single COUNT 
> aggregation, or a single Long key with no aggregation, also known as 
> duplicate reduction.
> 3 new classes are introduced that store the count in the slot table and don't 
> allocate hash elements:
> {noformat}
>   COUNT(column)  VectorGroupByHashLongKeyCountColumnOperator  
>   COUNT(key) VectorGroupByHashLongKeyCountKeyOperator
>   COUNT(*)   VectorGroupByHashLongKeyCountStarOperator   
> {noformat}
> And the duplicate reduction operator for a single Long key:
> {noformat}
>   VectorGroupByHashLongKeyDuplicateReductionOperator
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18739) Add support for Export from Acid table

2018-04-16 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440313#comment-16440313
 ] 

Eugene Koifman commented on HIVE-18739:
---

Patch 25 addresses [~jdere]'s comments. [~sershe], could you review, please?


> Add support for Export from Acid table
> --
>
> Key: HIVE-18739
> URL: https://issues.apache.org/jira/browse/HIVE-18739
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Major
> Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, 
> HIVE-18739.06.patch, HIVE-18739.08.patch, HIVE-18739.09.patch, 
> HIVE-18739.10.patch, HIVE-18739.11.patch, HIVE-18739.12.patch, 
> HIVE-18739.13.patch, HIVE-18739.14.patch, HIVE-18739.15.patch, 
> HIVE-18739.16.patch, HIVE-18739.17.patch, HIVE-18739.19.patch, 
> HIVE-18739.20.patch, HIVE-18739.21.patch, HIVE-18739.23.patch, 
> HIVE-18739.24.patch, HIVE-18739.25.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19214) High throughput ingest ORC format

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440314#comment-16440314
 ] 

Hive QA commented on HIVE-19214:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} streaming: The patch generated 4 new + 224 unchanged - 
0 fixed = 228 total (was 224) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10262/dev-support/hive-personality.sh
 |
| git revision | master / 28f7d19 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10262/yetus/diff-checkstyle-streaming.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10262/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql streaming U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10262/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> High throughput ingest ORC format
> -
>
> Key: HIVE-19214
> URL: https://issues.apache.org/jira/browse/HIVE-19214
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Gopal V
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch
>
>
> Create delta files with all ORC overhead disabled (no index, no compression, 
> no dictionary). Compactor will recreate the orc files with index, compression 
> and dictionary encoding.
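For illustration, a minimal sketch (assuming the standalone org.apache.orc writer API and the orc.dictionary.key.threshold setting) of what creating such a low-overhead delta file could look like; the actual patch may configure this differently:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.CompressionKind;
import org.apache.orc.OrcFile;
import org.apache.orc.TypeDescription;
import org.apache.orc.Writer;

public class LowOverheadOrcDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Setting the dictionary key threshold to 0 disables dictionary encoding.
    conf.setDouble("orc.dictionary.key.threshold", 0.0);

    TypeDescription schema =
        TypeDescription.fromString("struct<id:bigint,msg:string>");

    Writer writer = OrcFile.createWriter(new Path("/tmp/ingest_delta.orc"),
        OrcFile.writerOptions(conf)
            .setSchema(schema)
            .compress(CompressionKind.NONE)   // no compression
            .rowIndexStride(0));              // 0 disables the row index
    writer.close();   // an empty but valid ORC file, for demonstration
  }
}
{code}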



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()

2018-04-16 Thread zhuwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhuwei updated HIVE-19202:
--
Fix Version/s: (was: 2.1.1)

> CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
> -
>
> Key: HIVE-19202
> URL: https://issues.apache.org/jira/browse/HIVE-19202
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.1.1
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Critical
> Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch
>
>
> I ran a query with a join and group by using the settings below, and CBO failed 
> due to a NullPointerException in HiveAggregate.isBucketedInput():
> set hive.execution.engine=tez;
> set hive.cbo.costmodel.extended=true;
>  
> In class HiveRelMdDistribution, we implemented the functions below:
> public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery 
> mq)
> public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq)
>  
> But in HiveAggregate.isBucketedInput, the argument passed to distribution is 
> "this.getInput()", which is obviously not right here. The correct argument is 
> "this".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19217) Upgrade to Hadoop 3.1.0

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440290#comment-16440290
 ] 

Hive QA commented on HIVE-19217:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919149/HIVE-19217.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10260/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10260/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10260/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-17 02:34:25.259
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10260/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-17 02:34:25.262
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 28f7d19 HIVE-19154: Poll notification events to invalidate the 
results cache (Jason Dere, reviewed by GopalV)
+ git clean -f -d
Removing ${project.basedir}/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 28f7d19 HIVE-19154: Poll notification events to invalidate the 
results cache (Jason Dere, reviewed by GopalV)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-17 02:34:30.371
+ rm -rf ../yetus_PreCommit-HIVE-Build-10260
+ mkdir ../yetus_PreCommit-HIVE-Build-10260
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10260
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10260/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/pom.xml: does not exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
[ERROR] Failed to execute goal on project hive-hcatalog: Could not resolve 
dependencies for project 
org.apache.hive.hcatalog:hive-hcatalog:pom:3.1.0-SNAPSHOT: Failed to collect 
dependencies for [org.mockito:mockito-all:jar:1.10.19 (test), 
org.apache.hadoop:hadoop-common:jar:3.1.0 (test), 
org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.1.0 (test), 
org.apache.pig:pig:jar:h2:0.16.0 (test), org.slf4j:slf4j-api:jar:1.7.10 
(compile)]: Failed to read artifact descriptor for 
org.apache.hadoop:hadoop-common:jar:3.1.0: Could not find artifact 
org.apache.hadoop:hadoop-project-dist:pom:3.1.0 in datanucleus 
(http://www.datanucleus.org/downloads/maven2) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hive-hcatalog
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12919149 - PreCommit-HIVE-Build

> Upgrade to Hadoop 3.1.0
> ---
>
> Key: HIVE-19217
> URL: https://issues.apache.org/jira/browse/HIVE-19217
> Project: Hive
>  Issue Type: Bug
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19217.1.patch
>
>
> Upgrade to Hadoop 3.1.0



--
This message was sent by 

[jira] [Commented] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440285#comment-16440285
 ] 

Hive QA commented on HIVE-19133:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919147/HIVE-19133.4.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 142 failed/errored test(s), 14236 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=39)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[joinneg] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[ptf_negative_JoinWithAmbigousAlias]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subq_insert] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[temp_table_create_like_partitions]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[temp_table_partitions]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[timestamp_literal]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_list_bucketing]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_seqfile]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_nonexistant_column]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_partition_column2]

[jira] [Commented] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception

2018-04-16 Thread Janaki Lahorani (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440262#comment-16440262
 ] 

Janaki Lahorani commented on HIVE-19160:


Thanks [~ashutoshc] for catching that.  I will fix it.

> Insert data into decimal column fails with Null Pointer Exception
> -
>
> Key: HIVE-19160
> URL: https://issues.apache.org/jira/browse/HIVE-19160
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19160.1.patch, HIVE-19160.2.patch, 
> HIVE-19160.3.patch
>
>
> drop table if exists testDecimal;
> create table testDecimal
> (cId    TINYINT,
>  cBigInt    DECIMAL,
>  cInt   DECIMAL,
>  cSmallInt  DECIMAL,
>  cTinyint   DECIMAL);
> insert into testDecimal values
> (1,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123);
> insert into testDecimal values
> (2,
>  1,
>  2,
>  3,
>  4);
> The second insert fails with null pointer exception.
> 2018-04-10T15:23:23,080 ERROR [5dba40ef-be49-4187-8a72-afbb46c41ecc main] 
> metastore.RetryingHMSHandler: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318)
>   at 
> org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35)
>   at 
> org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1040)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:7166)
>   at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>   at com.sun.proxy.$Proxy40.set_aggr_stats_for(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.setPartitionColumnStatistics(HiveMetaStoreClient.java:1870)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.setPartitionColumnStatistics(SessionHiveMetaStoreClient.java:395)
>   at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy41.setPartitionColumnStatistics(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.setPartitionColumnStatistics(Hive.java:4171)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.persistColumnStats(ColStatsProcessor.java:179)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.process(ColStatsProcessor.java:83)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440243#comment-16440243
 ] 

Ashutosh Chauhan edited comment on HIVE-19160 at 4/17/18 1:49 AM:
--

{code}
 Decimal lowValue = aggregateData.getLowValue() != null &&
     aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0 ?
     aggregateData.getLowValue() : newData.getLowValue();
{code}

should be 
{code}
 Decimal lowValue = aggregateData.getLowValue() != null &&
     aggregateData.getLowValue().compareTo(newData.getLowValue()) < 0 ?
     aggregateData.getLowValue() : newData.getLowValue();
{code}

Since we get a new value which is lower, that is what we want to use and 
persist in stats.


was (Author: ashutoshc):
{code}
 Decimal lowValue = aggregateData.getLowValue() != null &&
     aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0 ?
     aggregateData.getLowValue() : newData.getLowValue();
{code}

should be 
{code}
 Decimal lowValue = aggregateData.getLowValue() != null &&
     aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0 ?
     newData.getLowValue() : aggregateData.getLowValue();
{code}

Since we get a new value which is lower, that is what we want to use and 
persist in stats.

> Insert data into decimal column fails with Null Pointer Exception
> -
>
> Key: HIVE-19160
> URL: https://issues.apache.org/jira/browse/HIVE-19160
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19160.1.patch, HIVE-19160.2.patch, 
> HIVE-19160.3.patch
>
>
> drop table if exists testDecimal;
> create table testDecimal
> (cId    TINYINT,
>  cBigInt    DECIMAL,
>  cInt   DECIMAL,
>  cSmallInt  DECIMAL,
>  cTinyint   DECIMAL);
> insert into testDecimal values
> (1,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123);
> insert into testDecimal values
> (2,
>  1,
>  2,
>  3,
>  4);
> The second insert fails with null pointer exception.
> 2018-04-10T15:23:23,080 ERROR [5dba40ef-be49-4187-8a72-afbb46c41ecc main] 
> metastore.RetryingHMSHandler: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318)
>   at 
> org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35)
>   at 
> org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1040)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:7166)
>   at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>   at com.sun.proxy.$Proxy40.set_aggr_stats_for(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.setPartitionColumnStatistics(HiveMetaStoreClient.java:1870)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.setPartitionColumnStatistics(SessionHiveMetaStoreClient.java:395)
>   at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy41.setPartitionColumnStatistics(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.setPartitionColumnStatistics(Hive.java:4171)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.persistColumnStats(ColStatsProcessor.java:179)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.process(ColStatsProcessor.java:83)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16480) ORC file with empty array and array fails to read

2018-04-16 Thread Jaehwa Jung (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440247#comment-16440247
 ] 

Jaehwa Jung commented on HIVE-16480:


[~owen.omalley]

 ORC-285 is already included in ORC 1.4.3. Could you start reviewing this patch?

> ORC file with empty array and array fails to read
> 
>
> Key: HIVE-16480
> URL: https://issues.apache.org/jira/browse/HIVE-16480
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1, 2.2.0
>Reporter: David Capwell
>Assignee: Owen O'Malley
>Priority: Major
>  Labels: pull-request-available
>
> We have a schema that has a array in it.  We were unable to read this 
> file and digging into ORC it seems that the issue is when the array is empty.
> Here is the stack trace
> {code:title=EmptyList.log|borderStyle=solid}
> ERROR 2017-04-19 09:29:17,075 [main] [EmptyList] [line 56] Failed to work 
> with type float 
> java.io.IOException: Error reading file: 
> /var/folders/t8/t5x1031d7mn17f6xpwnkkv_4gn/T/1492619355819-0/file-float.orc
>   at 
> org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1052) 
> ~[hive-orc-2.1.1.jar:2.1.1]
>   at 
> org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.nextBatch(RecordReaderImpl.java:135)
>  ~[hive-exec-2.1.1.jar:2.1.1]
>   at EmptyList.emptyList(EmptyList.java:49) ~[test-classes/:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_121]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_121]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_121]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121]
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  [junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  [junit-4.12.jar:4.12]
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  [junit-4.12.jar:4.12]
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) 
> [junit-4.12.jar:4.12]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  [junit-4.12.jar:4.12]
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) 
> [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) 
> [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) 
> [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) 
> [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) 
> [junit-4.12.jar:4.12]
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363) 
> [junit-4.12.jar:4.12]
>   at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
>   at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  [junit-rt.jar:na]
>   at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51)
>  [junit-rt.jar:na]
>   at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237)
>  [junit-rt.jar:na]
>   at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) 
> [junit-rt.jar:na]
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[na:1.8.0_121]
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[na:1.8.0_121]
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[na:1.8.0_121]
>   at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121]
>   at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) 
> [idea_rt.jar:na]
> Caused by: java.io.EOFException: Read past EOF for compressed stream Stream 
> for column 1 kind DATA position: 0 length: 0 range: 0 offset: 0 limit: 0
>   at 
> org.apache.orc.impl.SerializationUtils.readFully(SerializationUtils.java:118) 
> ~[hive-orc-2.1.1.jar:2.1.1]
>   at 
> org.apache.orc.impl.SerializationUtils.readFloat(SerializationUtils.java:78) 
> ~[hive-orc-2.1.1.jar:2.1.1]
>   at 
> org.apache.orc.impl.TreeReaderFactory$FloatTreeReader.nextVector(TreeReaderFactory.java:619)
>  ~[hive-orc-2.1.1.jar:2.1.1]
>   at 
> org.apache.orc.impl.TreeReaderFactory$ListTreeReader.nextVector(TreeReaderFactory.java:1902)
>  

[jira] [Commented] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440243#comment-16440243
 ] 

Ashutosh Chauhan commented on HIVE-19160:
-

{code}
 Decimal lowValue = aggregateData.getLowValue() != null &&
     aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0 ?
     aggregateData.getLowValue() : newData.getLowValue();
{code}

should be 
{code}
 Decimal lowValue = aggregateData.getLowValue() != null &&
     aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0 ?
     newData.getLowValue() : aggregateData.getLowValue();
{code}

Since we get a new value which is lower, that is what we want to use and 
persist in stats.
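
For illustration only, a null-safe variant of that selection (a sketch reusing the names from the snippets above, not the committed patch); guarding both sides against null is also the kind of check that would avoid the NullPointerException from Decimal.compareTo() reported in this issue:

{code}
// Sketch: keep the smaller of the two low values, treating null as "no value yet".
Decimal aggLow = aggregateData.getLowValue();
Decimal newLow = newData.getLowValue();
Decimal lowValue;
if (aggLow == null) {
  lowValue = newLow;
} else if (newLow == null) {
  lowValue = aggLow;
} else {
  lowValue = aggLow.compareTo(newLow) < 0 ? aggLow : newLow;
}
{code}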

> Insert data into decimal column fails with Null Pointer Exception
> -
>
> Key: HIVE-19160
> URL: https://issues.apache.org/jira/browse/HIVE-19160
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19160.1.patch, HIVE-19160.2.patch, 
> HIVE-19160.3.patch
>
>
> drop table if exists testDecimal;
> create table testDecimal
> (cId    TINYINT,
>  cBigInt    DECIMAL,
>  cInt   DECIMAL,
>  cSmallInt  DECIMAL,
>  cTinyint   DECIMAL);
> insert into testDecimal values
> (1,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123);
> insert into testDecimal values
> (2,
>  1,
>  2,
>  3,
>  4);
> The second insert fails with null pointer exception.
> 2018-04-10T15:23:23,080 ERROR [5dba40ef-be49-4187-8a72-afbb46c41ecc main] 
> metastore.RetryingHMSHandler: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318)
>   at 
> org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35)
>   at 
> org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1040)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:7166)
>   at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>   at com.sun.proxy.$Proxy40.set_aggr_stats_for(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.setPartitionColumnStatistics(HiveMetaStoreClient.java:1870)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.setPartitionColumnStatistics(SessionHiveMetaStoreClient.java:395)
>   at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy41.setPartitionColumnStatistics(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.setPartitionColumnStatistics(Hive.java:4171)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.persistColumnStats(ColStatsProcessor.java:179)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.process(ColStatsProcessor.java:83)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Attachment: HIVE-19001.4.patch

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch, HIVE-19001.4.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Status: Patch Available  (was: Open)

Latest patch (4) added support for table-level CHECK constraints during CREATE 
TABLE.

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch, HIVE-19001.4.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Status: Open  (was: Patch Available)

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440239#comment-16440239
 ] 

Hive QA commented on HIVE-19133:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10259/dev-support/hive-personality.sh
 |
| git revision | master / 28f7d19 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10259/yetus/patch-asflicense-problems.txt
 |
| modules | C: common itests/hive-unit ql service U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10259/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HS2 WebUI phase-wise performance metrics not showing correctly
> --
>
> Key: HIVE-19133
> URL: https://issues.apache.org/jira/browse/HIVE-19133
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Web UI
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-19133.1.patch, HIVE-19133.2.patch, 
> HIVE-19133.3.patch, HIVE-19133.4.patch, WebUI-compile time query metrics.png
>
>
> The query specific WebUI metrics (go to drilldown -> performance logging) are 
> not showing up in the correct phase and are often mixed up.
> Attaching screenshot.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440238#comment-16440238
 ] 

Ashutosh Chauhan commented on HIVE-19202:
-

+1

> CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
> -
>
> Key: HIVE-19202
> URL: https://issues.apache.org/jira/browse/HIVE-19202
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.1.1
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Critical
> Fix For: 2.1.1
>
> Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch
>
>
> I ran a query with join and group by with below settings, COB failed due to 
> NullPointerException in HiveAggregate.isBucketedInput()
> set hive.execution.engine=tez;
> set hive.cbo.costmodel.extended=true;
>  
> In class HiveRelMdDistribution, we implemented below functions:
> public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery 
> mq)
> public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq)
>  
> But in HiveAggregate.isBucketedInput, the argument passed to distribution is 
> "this.getInput()"
> , obviously it's not right here. The right argument needed is "this"
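
As a purely illustrative sketch of that change (assuming a Calcite RelMetadataQuery handle mq is available inside HiveAggregate.isBucketedInput(); this is not the actual patch):

{code:java}
// Wrong: computes the distribution of the aggregate's input, so the
// HiveRelMdDistribution overload written for HiveAggregate is never consulted:
//   RelDistribution dist = mq.distribution(this.getInput());

// Right: ask for the distribution of the aggregate node itself:
RelDistribution dist = mq.distribution(this);
{code}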



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19139) Hive.getValidPartitionsInPath() issue

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HIVE-19139.
-
Resolution: Not A Problem

This should be fixed as part of actually adding multi statement txns :)

> Hive.getValidPartitionsInPath() issue
> -
>
> Key: HIVE-19139
> URL: https://issues.apache.org/jira/browse/HIVE-19139
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Priority: Major
>
> this method looks like this.  This will not work for multi-stmt txns since 
> each statement uses the same writeId but different statementId.
> {noformat}
> // The non-MM path only finds new partitions, as it is looking at the temp 
> path.
> // To produce the same effect, we will find all the partitions affected by 
> this txn ID.
> // Note: we ignore the statement ID here, because it's currently irrelevant 
> for MoveTask
> // where this is used; we always want to load everything; also the only case 
> where
> // we have multiple statements anyway is union.
> Utilities.FILE_OP_LOGGER.trace(
>  "Looking for dynamic partitions in {} ({} levels)", loadPath, numDP);
> Path[] leafStatus = Utilities.getMmDirectoryCandidates(
>  fs, loadPath, numDP, numLB, null, writeId, -1, conf, isInsertOverwrite); 
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-16 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440226#comment-16440226
 ] 

Jason Dere commented on HIVE-17970:
---

Looks ok to me, +1.
May want to check with Eugene as well.

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType,
>     boolean isSrcLocal, boolean isSkewedStoreAsSubdir, boolean isAcid,
>     boolean hasFollowingStatsTask, Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data then write new data 
> into the table.  This logic makes sense for non-acid tables but for Acid/MM 
> it should work like INSERT OVERWRITE statement and write new data to base_n/. 
> This way the lock manager can be used to either get an X lock for IOW and 
> thus block all readers or let it run with SemiShared and let readers continue 
> and make the system more concurrent.
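
A hedged sketch of the direction described above (variable and directory names are illustrative, not the actual patch):

{code:java}
// Sketch only: for an MM/ACID table, an OVERWRITE load targets a fresh base
// directory for this write instead of deleting the existing data up front,
// so the lock manager can arbitrate between the writer and concurrent readers.
Path destDir = (isMmTable && isOverwrite)
    ? new Path(tblPath, "base_" + txnId)   // naming is illustrative
    : tblPath;
{code}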



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18469) HS2UI: Introduce separate option to show query on web ui

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440220#comment-16440220
 ] 

Hive QA commented on HIVE-18469:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919144/HIVE-18469.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 21 failed/errored test(s), 13438 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Assigned] (HIVE-17657) export/import for MM tables is broken

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-17657:
---

Assignee: Sergey Shelukhin

> export/import for MM tables is broken
> -
>
> Key: HIVE-17657
> URL: https://issues.apache.org/jira/browse/HIVE-17657
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
>
> there is mm_exim.q but it's not clear from the tests what file structure it 
> creates 
> On import the txnids in the directory names would have to be remapped if 
> importing to a different cluster.  Perhaps export can be smart and export 
> highest base_x and accretive deltas (minus aborted ones).  Then import can 
> ...?  It would have to remap txn ids from the archive to new txn ids.  This 
> would then mean that import is made up of several transactions rather than 1 
> atomic op.  (all locks must belong to a transaction)
> One possibility is to open a new txn for each dir in the archive (where 
> start/end txn of file name is the same) and commit all of them at once (need 
> new TMgr API for that).  This assumes using a shared lock (if any!) and thus 
> allows other inserts (not related to import) to occur.
> What if you have delta_6_9, such as a result of concatenate?  If we stipulate 
> that this must mean that there is no delta_6_6 or any other "obsolete" delta 
> in the archive we can map it to a new single txn delta_x_x.
> Add read_only mode for tables (useful in general, may be needed for upgrade 
> etc) and use that to make the above atomic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18827) useless dynamic value exceptions strike back

2018-04-16 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18827:
--
Attachment: HIVE-18827.2.patch

> useless dynamic value exceptions strike back
> 
>
> Key: HIVE-18827
> URL: https://issues.apache.org/jira/browse/HIVE-18827
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-18827.1.patch, HIVE-18827.2.patch
>
>
> Looking at ~master, I can see tons of exceptions like this in LLAP log:
> {noformat}
> 2018-02-27T14:07:51,989  WARN [IO-Elevator-Thread-12 
> (1515669035295_0909_1_08_000117_0)] impl.RecordReaderImpl: 
> NoDynamicValuesException when evaluating predicate. Skipping ORC PPD. Stats: 
> numberOfValues: 9750
> intStatistics {
>   minimum: 11335
>   maximum: 560
>   sum: 27648854404
> }
> hasNull: true
>  Predicate: (BETWEEN ss_addr_sk 
> DynamicValue(RS_27_customer_address_ca_address_sk_min) 
> DynamicValue(RS_27_customer_address_ca_address_sk_max))
> org.apache.hadoop.hive.ql.plan.DynamicValue$NoDynamicValuesException: Value 
> does not exist in registry: RS_27_customer_address_ca_address_sk_min
>   at 
> org.apache.hadoop.hive.ql.exec.tez.DynamicValueRegistryTez.getValue(DynamicValueRegistryTez.java:77)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.plan.DynamicValue.getValue(DynamicValue.java:137) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.plan.DynamicValue.getJavaValue(DynamicValue.java:97)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.plan.DynamicValue.getLiteral(DynamicValue.java:93) 
> ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.sarg.SearchArgumentImpl$PredicateLeafImpl.getLiteralList(SearchArgumentImpl.java:120)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.orc.impl.RecordReaderImpl.evaluatePredicateMinMax(RecordReaderImpl.java:553)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.orc.impl.RecordReaderImpl.evaluatePredicateRange(RecordReaderImpl.java:463)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.orc.impl.RecordReaderImpl.evaluatePredicateProto(RecordReaderImpl.java:423)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.orc.impl.RecordReaderImpl$SargApplier.pickRowGroups(RecordReaderImpl.java:848)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.determineRgsToRead(OrcEncodedDataReader.java:835)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.performDataRead(OrcEncodedDataReader.java:335)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:276)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader$4.run(OrcEncodedDataReader.java:273)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at java.security.AccessController.doPrivileged(Native Method) 
> ~[?:1.8.0_112]
>   at javax.security.auth.Subject.doAs(Subject.java:422) ~[?:1.8.0_112]
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965)
>  ~[hadoop-common-3.0.0.3.0.0.0-776.jar:?]
>   at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:273)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.llap.io.encoded.OrcEncodedDataReader.callInternal(OrcEncodedDataReader.java:110)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) 
> ~[tez-common-0.9.2-SNAPSHOT.jar:0.9.2-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.llap.daemon.impl.StatsRecordingThreadPool$WrappedCallable.call(StatsRecordingThreadPool.java:110)
>  ~[hive-llap-server-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
> ~[?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  ~[?:1.8.0_112]
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  ~[?:1.8.0_112]
>   at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19215) JavaUtils.AnyIdDirFilter ignores base_n directories

2018-04-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440192#comment-16440192
 ] 

Prasanth Jayachandran commented on HIVE-19215:
--

Don't we have a regex that parses delta_<minid>_<maxid>? If we can parse minid 
and maxid using the regex then the filter can return true, right?
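
Purely as illustration (the patterns below are assumptions, not the actual JavaUtils code), such a filter could accept base_<id> directories alongside the delta ones:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch only: accept base_<id> as well as delta_<min>_<max>[_<stmtId>] directories.
class AnyIdDirFilterSketch {
  private static final Pattern BASE = Pattern.compile("base_(\\d+)");
  private static final Pattern DELTA = Pattern.compile("delta_(\\d+)_(\\d+)(?:_(\\d+))?");

  boolean accept(String dirName) {
    if (BASE.matcher(dirName).matches()) {
      return true;
    }
    Matcher d = DELTA.matcher(dirName);
    return d.matches();   // min/max write ids would be available via d.group(1) / d.group(2)
  }
}
{code}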

> JavaUtils.AnyIdDirFilter ignores base_n directories
> ---
>
> Key: HIVE-19215
> URL: https://issues.apache.org/jira/browse/HIVE-19215
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19215.patch
>
>
> cc [~sershe], [~steveyeom2017]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19154) Poll notification events to invalidate the results cache

2018-04-16 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19154:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master/branch-3

> Poll notification events to invalidate the results cache
> 
>
> Key: HIVE-19154
> URL: https://issues.apache.org/jira/browse/HIVE-19154
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19154.1.patch, HIVE-19154.2.patch, 
> HIVE-19154.3.patch
>
>
> Related to the work for HIVE-18609. HIVE-18609 will only invalidate entries 
> in the cache if that query looked up again, which could potentially leave a 
> lot of undetected invalid entries in the cache taking up space which could 
> cause other entries to be evicted. To remove these entries in a more timely 
> fashion, have a background thread to periodically check the notification 
> events for updates to the tables used in the results cache.
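
For illustration, a minimal sketch of such a poller (class and method names are placeholders, not the actual QueryResultsCache API):

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch only: a background thread that periodically checks notification events
// and invalidates affected cache entries.
class CacheInvalidationPollerSketch {
  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  void start(long intervalMs) {
    scheduler.scheduleAtFixedRate(this::pollOnce, intervalMs, intervalMs, TimeUnit.MILLISECONDS);
  }

  private void pollOnce() {
    // 1) fetch notification events newer than the last seen event id
    // 2) collect the tables touched by those events
    // 3) mark cache entries that read any of those tables as invalid
  }
}
{code}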



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440189#comment-16440189
 ] 

Prasanth Jayachandran commented on HIVE-19224:
--

lgtm, +1

> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19224.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.getTokenUser(LlapPluginEndpointClientImpl.java:77)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.createProxy(AsyncPbRpcProxy.java:447)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.access$100(AsyncPbRpcProxy.java:66)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy$3.call(AsyncPbRpcProxy.java:429) 
> ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4793)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
> ~[guava-19.0.jar:?]
> ... 12 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19215) JavaUtils.AnyIdDirFilter ignores base_n directories

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19215:

Status: Patch Available  (was: Open)

[~prasanth_j] can you review this? Small patch

> JavaUtils.AnyIdDirFilter ignores base_n directories
> ---
>
> Key: HIVE-19215
> URL: https://issues.apache.org/jira/browse/HIVE-19215
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19215.patch
>
>
> cc [~sershe], [~steveyeom2017]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19215) JavaUtils.AnyIdDirFilter ignores base_n directories

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19215:

Attachment: HIVE-19215.patch

> JavaUtils.AnyIdDirFilter ignores base_n directories
> ---
>
> Key: HIVE-19215
> URL: https://issues.apache.org/jira/browse/HIVE-19215
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HIVE-19215.patch
>
>
> cc [~sershe], [~steveyeom2017]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18469) HS2UI: Introduce separate option to show query on web ui

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440183#comment-16440183
 ] 

Hive QA commented on HIVE-18469:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} The patch common passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 0 
unchanged - 15 fixed = 0 total (was 15) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} ql: The patch generated 0 new + 133 unchanged - 1 
fixed = 133 total (was 134) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} The patch service passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10258/dev-support/hive-personality.sh
 |
| git revision | master / 6afa544 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10258/yetus/patch-asflicense-problems.txt
 |
| modules | C: common itests/hive-unit ql service U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10258/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HS2UI: Introduce separate option to show query on web ui
> 
>
> Key: HIVE-18469
> URL: https://issues.apache.org/jira/browse/HIVE-18469
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-18469.1.patch, HIVE-18469.2.patch
>
>
> currently {{ConfVars.HIVE_LOG_EXPLAIN_OUTPUT}} enables 2 features:
> * log the query to the console (even thru beeline)
> * shows the query on the web ui
> I've enabled it...and ever since then my beeline is always flooded with an 
> {{explain extended}} output...which is very verbose; even for simple queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19215) JavaUtils.AnyIdDirFilter ignores base_n directories

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-19215:
---

Assignee: Sergey Shelukhin

> JavaUtils.AnyIdDirFilter ignores base_n directories
> ---
>
> Key: HIVE-19215
> URL: https://issues.apache.org/jira/browse/HIVE-19215
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>
> cc [~sershe], [~steveyeom2017]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19224:

Description: 
{noformat}
Caused by: java.lang.NullPointerException
at 
org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.getTokenUser(LlapPluginEndpointClientImpl.java:77)
 ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
at 
org.apache.hadoop.hive.llap.AsyncPbRpcProxy.createProxy(AsyncPbRpcProxy.java:447)
 ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
at 
org.apache.hadoop.hive.llap.AsyncPbRpcProxy.access$100(AsyncPbRpcProxy.java:66) 
~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
at 
org.apache.hadoop.hive.llap.AsyncPbRpcProxy$3.call(AsyncPbRpcProxy.java:429) 
~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
at 
com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4793)
 ~[guava-19.0.jar:?]
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
 ~[guava-19.0.jar:?]
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323) 
~[guava-19.0.jar:?]
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
 ~[guava-19.0.jar:?]
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2201) 
~[guava-19.0.jar:?]
... 12 more
{noformat}

  was:
{noformat}
java.lang.IllegalArgumentException: Null user
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207) 
~[guava-19.0.jar:?]
at com.google.common.cache.LocalCache.get(LocalCache.java:3953) 
~[guava-19.0.jar:?]
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) 
~[guava-19.0.jar:?]
at 
org.apache.hadoop.hive.llap.AsyncPbRpcProxy.getProxy(AsyncPbRpcProxy.java:425) 
~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
at 
org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.access$000(LlapPluginEndpointClientImpl.java:45)
 ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
at 
org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:116)
 ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
at 
org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:93)
 ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
 [guava-19.0.jar:?]
at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
 [guava-19.0.jar:?]
at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
 [guava-19.0.jar:?]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_161]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_161]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
{noformat}


> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19224.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.getTokenUser(LlapPluginEndpointClientImpl.java:77)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.createProxy(AsyncPbRpcProxy.java:447)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.access$100(AsyncPbRpcProxy.java:66)
>  ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy$3.call(AsyncPbRpcProxy.java:429) 
> ~[hive-exec-3.0.0.3.0.0.0-1145.jar:3.0.0.3.0.0.0-1145]
> at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4793)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3542)
>  ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2323) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2286)
>  ~[guava-19.0.jar:?]
> at 
> 

[jira] [Updated] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19224:

Status: Patch Available  (was: Open)

[~prasanth_j] can you take a look? Not sure which case produces nulls, added 
checks and warning logs

> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19224.patch
>
>
> {noformat}
> java.lang.IllegalArgumentException: Null user
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207) 
> ~[guava-19.0.jar:?]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3953) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) 
> ~[guava-19.0.jar:?]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.getProxy(AsyncPbRpcProxy.java:425)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.access$000(LlapPluginEndpointClientImpl.java:45)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:116)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:93)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>  [guava-19.0.jar:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440180#comment-16440180
 ] 

Sergey Shelukhin commented on HIVE-17647:
-

[~ekoifman] can you review the changes? you've already looked at the initial 
patch :)

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.01.patch, HIVE-17647.02.patch, 
> HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
>     mmWriteId = txnManager.getCurrentTxnId();
>   } else {
>     mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
>     txnManager.commitTxn();
>   }
> {noformat}
> this should throw if there is no open transaction.  It should never open one.
> In general the logic seems suspect.  Looks like the intent is to move all 
> existing files into a delta_x_x/ when a plain table is converted to MM table. 
>  This seems like something that needs to be done from under an Exclusive lock 
> to prevent concurrent Insert operations writing data under table/partition 
> root.  But this is too late to acquire locks which should be done from the 
> Driver.acquireLocks()  (or else have deadlock detector since acquiring them 
> here would bread all-or-nothing lock acquisition semantics currently required 
> w/o deadlock detector)
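
A minimal sketch of the behaviour suggested above, reusing the names from the quoted snippet (illustrative only, not the committed patch):

{code}
// Sketch: never open (and immediately commit) a transaction here; require that
// the Driver already opened one and fail loudly otherwise.
if (!txnManager.isTxnOpen()) {
  throw new IllegalStateException(
      "Converting a table to MM requires an already-open transaction");
}
long mmWriteId = txnManager.getCurrentTxnId();
{code}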



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19224:

Attachment: HIVE-19224.patch

> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19224.patch
>
>
> {noformat}
> java.lang.IllegalArgumentException: Null user
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207) 
> ~[guava-19.0.jar:?]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3953) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) 
> ~[guava-19.0.jar:?]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.getProxy(AsyncPbRpcProxy.java:425)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.access$000(LlapPluginEndpointClientImpl.java:45)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:116)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:93)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>  [guava-19.0.jar:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440177#comment-16440177
 ] 

Sergey Shelukhin commented on HIVE-17970:
-

[~jdere] can you take a look at this one? thnx

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new data 
> into the table.  This logic makes sense for non-ACID tables, but for ACID/MM 
> tables it should work like an INSERT OVERWRITE statement and write the new data to 
> base_n/. This way the lock manager can be used either to take an X lock for IOW and 
> thus block all readers, or to let it run with SemiShared so readers continue, 
> making the system more concurrent.
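As a purely illustrative sketch (not Hive's loadTable code), the base_n idea from the description amounts to picking a write-id-scoped target directory for an overwrite on an MM table instead of clearing the table root:

{code:java}
import java.nio.file.Path;
import java.nio.file.Paths;

final class MmOverwriteTarget {
  // Hypothetical helper: where should an INSERT/LOAD OVERWRITE write its files?
  static Path overwriteTarget(Path tableRoot, boolean isMmTable, long writeId) {
    if (isMmTable) {
      // Readers keep using older base_n/delta_n dirs until the new base commits.
      return tableRoot.resolve("base_" + writeId);
    }
    // Non-ACID behavior: callers clear tableRoot and write directly into it.
    return tableRoot;
  }

  public static void main(String[] args) {
    System.out.println(overwriteTarget(Paths.get("/warehouse/t"), true, 42L));
  }
}
{code}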



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18609) Results cache invalidation based on ACID table updates

2018-04-16 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18609:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master and branch-3.


> Results cache invalidation based on ACID table updates
> --
>
> Key: HIVE-18609
> URL: https://issues.apache.org/jira/browse/HIVE-18609
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 3.1.0
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-18609.1.patch, HIVE-18609.2.patch, 
> HIVE-18609.3.patch, HIVE-18609.4.patch
>
>
> Look into using the materialized view invalidation mechanisms to 
> automatically invalidate queries in the results cache if the underlying 
> tables used in the cached queries have been modified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18410) [Performance][Avro] Reading flat Avro tables is very expensive in Hive

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440167#comment-16440167
 ] 

Ashutosh Chauhan commented on HIVE-18410:
-

+1

> [Performance][Avro] Reading flat Avro tables is very expensive in Hive
> --
>
> Key: HIVE-18410
> URL: https://issues.apache.org/jira/browse/HIVE-18410
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 1.2.1, 2.1.0, 3.0.0, 2.3.2
>Reporter: Ratandeep Ratti
>Assignee: Ratandeep Ratti
>Priority: Major
> Fix For: 2.3.2, 3.1.0
>
> Attachments: HIVE-18410.patch, HIVE-18410_1.patch, 
> HIVE-18410_2.patch, HIVE-18410_3.patch, profiling_with_patch.nps, 
> profiling_with_patch.png, profiling_without_patch.nps, 
> profiling_without_patch.png
>
>
> There's a performance penalty when reading flat [no nested fields] Avro 
> tables. When reading the same flat dataset in Pig, it takes half the time.  
> On profiling, a lot of time is spent in 
> {{AvroDeserializer.deserializeSingleItemNullableUnion()}}. The bulk of the 
> time is spent in GenericData.get().resolveUnion(), which calls 
> GenericData.getSchemaName(Object datum), which does a lot of instanceof 
> checks.  This could be simplified with performance benefits. An approach is 
> described in this patch which almost halves the runtime.
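A minimal sketch of the kind of simplification the description hints at (this is not the attached patch): for a two-branch nullable union, the branch can be chosen with a plain null check instead of going through GenericData.resolveUnion()'s instanceof-based lookup. Only the Avro Schema API is used here.

{code:java}
import java.util.List;
import org.apache.avro.Schema;

final class NullableUnionShortcut {
  /** Returns the non-null branch schema, or null when the datum selects the null branch. */
  static Schema resolveNullableUnion(Schema unionSchema, Object datum) {
    List<Schema> branches = unionSchema.getTypes();
    if (branches.size() == 2) {
      boolean firstIsNull = branches.get(0).getType() == Schema.Type.NULL;
      boolean secondIsNull = branches.get(1).getType() == Schema.Type.NULL;
      if (firstIsNull || secondIsNull) {
        if (datum == null) {
          return null; // null branch selected, no instanceof checks needed
        }
        return firstIsNull ? branches.get(1) : branches.get(0);
      }
    }
    // General unions would still need the slower resolution path.
    throw new IllegalArgumentException("not a simple nullable union");
  }
}
{code}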



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-16 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440165#comment-16440165
 ] 

Prasanth Jayachandran commented on HIVE-19211:
--

[~ekoifman]/[~ashutoshc] can you please take a look?
- Removed hcatalog-core dependency
- New streaming API changes
- Added DP support to streaming API
- Removed unnecessary exception classes
- Added some more tests

> New streaming ingest API and support for dynamic partitioning
> -
>
> Key: HIVE-19211
> URL: https://issues.apache.org/jira/browse/HIVE-19211
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19211.1.patch
>
>
> - New streaming API under new hive sub-module
> - Dynamic partitioning support
> - Delta file optimizations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19194) TestDruidStorageHandler fails

2018-04-16 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440138#comment-16440138
 ] 

slim bouguerra edited comment on HIVE-19194 at 4/16/18 11:30 PM:
-

*Issue*
 The bug was introduced by HIVE-19157 and is covered by the test case
{code:java}
 testCommitMultiInsertOverwriteTable{code}
The issue arises when we have an INSERT OVERWRITE statement with 0 rows and the 
code was exiting without disabling the existing Druid DataSource. 
 *Fix*
 This patch adds a step to disable Druid Datasource if it is an insert 
overwrite statement.
 *Improvement*
 I have done some refactoring to make the code more readable and so it can be broken 
into small functions.


was (Author: bslim):
*Issue*
Bug is introduced by HIVE-19157. The bug is actually covered by the test case 
{code} testCommitMultiInsertOverwriteTable{code}
Issuer raises when we have an Insert overwrite statement with 0 rows and the 
code was existing without disabling the existing Druid DataSource. 
*Fix*
This patch adds a step to disable Druid Datasource if it is an insert overwrite 
statement.
*Improvement*
I have done some refactoring to make the code more readable and can be broken 
into small functions.


> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19194) TestDruidStorageHandler fails

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440160#comment-16440160
 ] 

Ashutosh Chauhan commented on HIVE-19194:
-

+1

> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440159#comment-16440159
 ] 

Hive QA commented on HIVE-19202:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919138/HIVE-19202.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 13437 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)


[jira] [Commented] (HIVE-19154) Poll notification events to invalidate the results cache

2018-04-16 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440154#comment-16440154
 ] 

Jason Dere commented on HIVE-19154:
---

Looks like the TestTriggersWorkloadManager failure may be happening on other ptest runs 
(https://builds.apache.org/job/PreCommit-HIVE-Build/10250/testReport/). I'm 
going to commit this one.

> Poll notification events to invalidate the results cache
> 
>
> Key: HIVE-19154
> URL: https://issues.apache.org/jira/browse/HIVE-19154
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19154.1.patch, HIVE-19154.2.patch, 
> HIVE-19154.3.patch
>
>
> Related to the work for HIVE-18609. HIVE-18609 will only invalidate entries 
> in the cache if that query is looked up again, which could potentially leave a 
> lot of undetected invalid entries in the cache taking up space and causing other 
> entries to be evicted. To remove these entries in a more timely 
> fashion, have a background thread periodically check the notification 
> events for updates to the tables used in the results cache.
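A minimal sketch of that polling idea, with EventSource and ResultsCache as stand-ins for the real metastore notification and results-cache APIs (the names here are assumptions, not Hive's classes):

{code:java}
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

final class CacheInvalidationPoller {
  interface EventSource {
    /** Returns {eventId, tableName} pairs with eventId greater than lastEventId. */
    List<String[]> eventsSince(long lastEventId);
  }
  interface ResultsCache {
    void invalidateTable(String tableName);
  }

  private final EventSource events;
  private final ResultsCache cache;
  private long lastEventId;

  CacheInvalidationPoller(EventSource events, ResultsCache cache) {
    this.events = events;
    this.cache = cache;
  }

  /** Checks for new notification events every periodSeconds and invalidates affected entries. */
  void start(long periodSeconds) {
    ScheduledExecutorService pool = Executors.newSingleThreadScheduledExecutor();
    pool.scheduleAtFixedRate(this::pollOnce, periodSeconds, periodSeconds, TimeUnit.SECONDS);
  }

  private void pollOnce() {
    for (String[] e : events.eventsSince(lastEventId)) {
      lastEventId = Math.max(lastEventId, Long.parseLong(e[0]));
      cache.invalidateTable(e[1]);
    }
  }
}
{code}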



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19009) Retain and use runtime statistics during hs2 lifetime

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440150#comment-16440150
 ] 

Ashutosh Chauhan commented on HIVE-19009:
-

Can you update RB with your latest patch. Also some of the failures look 
related.

> Retain and use runtime statistics during hs2 lifetime
> -
>
> Key: HIVE-19009
> URL: https://issues.apache.org/jira/browse/HIVE-19009
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, 
> HIVE-19009.03.patch, HIVE-19009.04.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19224) incorrect token handling for LLAP plugin endpoint - part 2

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HIVE-19224:
---


> incorrect token handling for LLAP plugin endpoint - part 2
> --
>
> Key: HIVE-19224
> URL: https://issues.apache.org/jira/browse/HIVE-19224
> Project: Hive
>  Issue Type: Bug
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
>Priority: Major
> Fix For: 3.0.0
>
>
> {noformat}
> java.lang.IllegalArgumentException: Null user
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2207) 
> ~[guava-19.0.jar:?]
> at com.google.common.cache.LocalCache.get(LocalCache.java:3953) 
> ~[guava-19.0.jar:?]
> at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4790) 
> ~[guava-19.0.jar:?]
> at 
> org.apache.hadoop.hive.llap.AsyncPbRpcProxy.getProxy(AsyncPbRpcProxy.java:425)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl.access$000(LlapPluginEndpointClientImpl.java:45)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:116)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> org.apache.hadoop.hive.ql.exec.tez.LlapPluginEndpointClientImpl$SendUpdateQueryCallable.call(LlapPluginEndpointClientImpl.java:93)
>  ~[hive-exec-3.0.0.3.0.0.0-1101.jar:3.0.0.3.0.0.0-1101]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
>  [guava-19.0.jar:?]
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
>  [guava-19.0.jar:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_161]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_161]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19194) TestDruidStorageHandler fails

2018-04-16 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440138#comment-16440138
 ] 

slim bouguerra commented on HIVE-19194:
---

*Issue*
The bug was introduced by HIVE-19157 and is covered by the test case 
{code} testCommitMultiInsertOverwriteTable{code}
The issue arises when we have an INSERT OVERWRITE statement with 0 rows and the 
code was exiting without disabling the existing Druid DataSource. 
*Fix*
This patch adds a step to disable Druid Datasource if it is an insert overwrite 
statement.
*Improvement*
I have done some refactoring to make the code more readable and so it can be broken 
into small functions.
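A self-contained sketch of the commit rule described above; SegmentPublisher and its methods are stand-ins for illustration, not Hive's Druid storage handler API:

{code:java}
import java.util.List;

final class DruidOverwriteCommit {
  interface SegmentPublisher {
    void disableDataSource(String dataSource);
    void publish(String dataSource, List<String> segments);
  }

  static void commit(SegmentPublisher publisher, String dataSource,
                     List<String> newSegments, boolean insertOverwrite) {
    if (newSegments.isEmpty()) {
      if (insertOverwrite) {
        // Overwrite produced zero rows: the old data must disappear, so disable
        // the existing datasource instead of returning early.
        publisher.disableDataSource(dataSource);
      }
      return;
    }
    if (insertOverwrite) {
      publisher.disableDataSource(dataSource); // replace, don't append
    }
    publisher.publish(dataSource, newSegments);
  }
}
{code}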


> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19195) Fix flaky tests and cleanup testconfiguration to run llap specific tests in llap only.

2018-04-16 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440137#comment-16440137
 ] 

Deepak Jaiswal commented on HIVE-19195:
---

I did not update the result file for llap_smb for TestMiniLlapCliDriver, which 
resulted in the failure. I removed it from the list after noticing that it already 
runs in LlapLocal.

Will look into why tez_smb_1 keeps changing its plan.

Thanks [~ashutoshc] for pointing this out.

> Fix flaky tests and cleanup testconfiguration to run llap specific tests in 
> llap only.
> --
>
> Key: HIVE-19195
> URL: https://issues.apache.org/jira/browse/HIVE-19195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19195.1.patch
>
>
> This test is certainly flaky. Seems like it makes some assumption about 
> available memory. Consider dropping this altogether.
>  Also move tests from the llaplocal.shared list to llaplocal so that they don't 
> run in MR.
> Makes HIVE-17055 redundant.
> {code:java}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing auto_sortmerge_join_2.q 
> 1101,1103d1100
> < Hive Runtime Error: Map local work exhausted memory
> < FAILED: Execution Error, return code 3 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> < ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19195) Fix flaky tests and cleanup testconfiguration to run llap specific tests in llap only.

2018-04-16 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-19195:
--
Fix Version/s: 3.1.0

> Fix flaky tests and cleanup testconfiguration to run llap specific tests in 
> llap only.
> --
>
> Key: HIVE-19195
> URL: https://issues.apache.org/jira/browse/HIVE-19195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Deepak Jaiswal
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19195.1.patch
>
>
> This test is certainly flaky. Seems like it makes some assumption about 
> available memory. Consider dropping this altogether.
>  Also move tests from the llaplocal.shared list to llaplocal so that they don't 
> run in MR.
> Makes HIVE-17055 redundant.
> {code:java}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing auto_sortmerge_join_2.q 
> 1101,1103d1100
> < Hive Runtime Error: Map local work exhausted memory
> < FAILED: Execution Error, return code 3 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> < ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19210) Create separate module for streaming ingest

2018-04-16 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19210:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> Create separate module for streaming ingest
> ---
>
> Key: HIVE-19210
> URL: https://issues.apache.org/jira/browse/HIVE-19210
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
> Attachments: HIVE-19210-branch-3.patch, HIVE-19210.1.patch, 
> HIVE-19210.2.patch, HIVE-19210.3.patch
>
>
> This will retain the old hcat streaming API for old clients. The new 
> streaming ingest API will be separate module under hive. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19194) TestDruidStorageHandler fails

2018-04-16 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19194:
--
Attachment: HIVE-19194.patch

> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19194.patch
>
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-16 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19211:
-
Attachment: HIVE-19211.1.patch

> New streaming ingest API and support for dynamic partitioning
> -
>
> Key: HIVE-19211
> URL: https://issues.apache.org/jira/browse/HIVE-19211
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19211.1.patch
>
>
> - New streaming API under new hive sub-module
> - Dynamic partitioning support
> - Delta file optimizations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19211) New streaming ingest API and support for dynamic partitioning

2018-04-16 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-19211:
-
Status: Patch Available  (was: Open)

> New streaming ingest API and support for dynamic partitioning
> -
>
> Key: HIVE-19211
> URL: https://issues.apache.org/jira/browse/HIVE-19211
> Project: Hive
>  Issue Type: Sub-task
>  Components: Streaming
>Affects Versions: 3.0.0, 3.1.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>Priority: Major
> Attachments: HIVE-19211.1.patch
>
>
> - New streaming API under new hive sub-module
> - Dynamic partitioning support
> - Delta file optimizations



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19194) TestDruidStorageHandler fails

2018-04-16 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19194:
--
Status: Patch Available  (was: Open)

> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19194) TestDruidStorageHandler fails

2018-04-16 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440130#comment-16440130
 ] 

slim bouguerra commented on HIVE-19194:
---

This patch has a fix for the breakage due to patch 
https://issues.apache.org/jira/browse/HIVE-19157


> TestDruidStorageHandler fails
> -
>
> Key: HIVE-19194
> URL: https://issues.apache.org/jira/browse/HIVE-19194
> Project: Hive
>  Issue Type: Sub-task
>  Components: Druid integration
>Reporter: Ashutosh Chauhan
>Assignee: slim bouguerra
>Priority: Major
>
> This test fails randomly. If it's not reproducible locally, consider improving 
> its stability since it does fail once in a while on Hive QA. 
> {code}
> java.lang.AssertionError: expected:<0> but was:<1> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:743) at 
> org.junit.Assert.assertEquals(Assert.java:118) at 
> org.junit.Assert.assertEquals(Assert.java:555) at 
> org.junit.Assert.assertEquals(Assert.java:542) at 
> org.apache.hadoop.hive.druid.TestDruidStorageHandler.testCommitMultiInsertOverwriteTable(TestDruidStorageHandler.java:414)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19124:

Attachment: (was: HIVE-19124.02.WIP.patch)

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, 
> HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440125#comment-16440125
 ] 

Sergey Shelukhin commented on HIVE-19124:
-

[~ekoifman] [~gopalv] can you please review? the latest patch addresses the new 
transaction creation issue.

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, 
> HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-19124:

Attachment: HIVE-19124.02.patch

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.WIP.patch, 
> HIVE-19124.02.patch, HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19124) implement a basic major compactor for MM tables

2018-04-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440123#comment-16440123
 ] 

Sergey Shelukhin commented on HIVE-19124:
-

Fixed some issues and added a test.

> implement a basic major compactor for MM tables
> ---
>
> Key: HIVE-19124
> URL: https://issues.apache.org/jira/browse/HIVE-19124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-19124.01.patch, HIVE-19124.02.WIP.patch, 
> HIVE-19124.02.patch, HIVE-19124.patch
>
>
> For now, it will run a query directly and only major compactions will be 
> supported.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19204) Detailed errors from some tasks are not displayed to the client because the tasks don't set exception when they fail

2018-04-16 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440121#comment-16440121
 ] 

Aihua Xu commented on HIVE-19204:
-

[~stakiar_impala_496e] Can you help review this change? With it, the 
client will get more error details for some tasks rather than just the 
error code.

> Detailed errors from some tasks are not displayed to the client because the 
> tasks don't set exception when they fail
> 
>
> Key: HIVE-19204
> URL: https://issues.apache.org/jira/browse/HIVE-19204
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 3.0.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19204.1.patch
>
>
> In TaskRunner.java, if a task has its exception set, then the task result 
> will carry that exception, and Driver.java will get the details and 
> display them to the client. But some tasks don't set the exception, so the client 
> won't see the details unless you check the HS2 log.
>   
> {noformat}
>   public void runSequential() {
> int exitVal = -101;
> try {
>   exitVal = tsk.executeTask(ss == null ? null : ss.getHiveHistory());
> } catch (Throwable t) {
>   if (tsk.getException() == null) {
> tsk.setException(t);
>   }
>   LOG.error("Error in executeTask", t);
> }
> result.setExitVal(exitVal);
> if (tsk.getException() != null) {
>   result.setTaskError(tsk.getException());
> }
>   }
>  {noformat}
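For illustration, a simplified stand-in task (not one of Hive's Task classes) showing the pattern the description asks every task to follow: record the Throwable via setException so TaskRunner/Driver can surface the cause rather than only the exit code.

{code:java}
abstract class SketchTask {
  private Throwable exception;

  public void setException(Throwable t) { this.exception = t; }
  public Throwable getException() { return exception; }

  protected abstract void run() throws Exception;

  public int executeTask() {
    try {
      run();
      return 0;
    } catch (Exception e) {
      setException(e); // without this, the client only ever sees the error code
      return 1;
    }
  }
}
{code}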



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Status: Patch Available  (was: Open)

Missed the schema upgrade changes. The latest patch (3) fixes that.

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Status: Open  (was: Patch Available)

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Attachment: HIVE-19001.3.patch

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, 
> HIVE-19001.3.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19202) CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440100#comment-16440100
 ] 

Hive QA commented on HIVE-19202:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 20s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10257/dev-support/hive-personality.sh
 |
| git revision | master / 6afa544 |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10257/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10257/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> CBO failed due to NullPointerException in HiveAggregate.isBucketedInput()
> -
>
> Key: HIVE-19202
> URL: https://issues.apache.org/jira/browse/HIVE-19202
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Affects Versions: 2.1.1
>Reporter: zhuwei
>Assignee: zhuwei
>Priority: Critical
> Fix For: 2.1.1
>
> Attachments: HIVE-19202.1.patch, HIVE-19202.2.patch
>
>
> I ran a query with a join and group by with the settings below; CBO failed due to a 
> NullPointerException in HiveAggregate.isBucketedInput()
> set hive.execution.engine=tez;
> set hive.cbo.costmodel.extended=true;
>  
> In class HiveRelMdDistribution, we implemented below functions:
> public RelDistribution distribution(HiveAggregate aggregate, RelMetadataQuery 
> mq)
> public RelDistribution distribution(HiveJoin join, RelMetadataQuery mq)
>  
> But in HiveAggregate.isBucketedInput, the argument passed to distribution is 
> "this.getInput()", which is obviously not right here. The right argument is "this".
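A self-contained toy illustration of that bug pattern (all types here are simplified stand-ins, not the Calcite/Hive classes): HiveRelMdDistribution only registers handlers for HiveAggregate and HiveJoin, so asking for the distribution of aggregate.getInput() instead of the aggregate itself misses the handler and yields null, which then triggers the NullPointerException.

{code:java}
import java.util.HashMap;
import java.util.Map;

final class DistributionLookupSketch {
  interface RelNode { }
  static final class HiveAggregate implements RelNode {
    final RelNode input;
    HiveAggregate(RelNode input) { this.input = input; }
    RelNode getInput() { return input; }
  }
  static final class TableScan implements RelNode { }

  // Stand-in for the metadata provider: handlers are registered per node type.
  static final Map<Class<?>, String> HANDLERS = new HashMap<>();
  static { HANDLERS.put(HiveAggregate.class, "hash-distributed on group keys"); }

  static String distribution(RelNode node) {
    return HANDLERS.get(node.getClass()); // null when no handler is registered
  }

  public static void main(String[] args) {
    HiveAggregate agg = new HiveAggregate(new TableScan());
    System.out.println(distribution(agg.getInput())); // null -> the reported NPE
    System.out.println(distribution(agg));            // handler found
  }
}
{code}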



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19222) TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: GC overhead limit exceeded"

2018-04-16 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440096#comment-16440096
 ] 

Aihua Xu commented on HIVE-19222:
-

I will test whether this change works around the issue, since it's blocking 
the commit. In the meantime we can find the root cause and revert this.

> TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: 
> GC overhead limit exceeded"
> ---
>
> Key: HIVE-19222
> URL: https://issues.apache.org/jira/browse/HIVE-19222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19222.1.patch
>
>
> TestNegativeCliDriver tests have been failing with OOM recently. Not sure why. I 
> will try increasing the memory to test it out.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18690) Integrate with Spark OutputMetrics

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440072#comment-16440072
 ] 

Hive QA commented on HIVE-18690:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919135/HIVE-18690.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 245 failed/errored test(s), 14230 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[auto_sortmerge_join_16]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=183)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[list_bucket_dml_10]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge1]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge2]
 (batchId=185)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge7]
 (batchId=185)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge_diff_fs]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge_incompat2]
 (batchId=185)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[spark_dynamic_partition_pruning]
 (batchId=182)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_dyn_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_map_operators]
 (batchId=93)
org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver[infer_bucket_sort_num_buckets]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into3] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into4] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into5] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_sorted] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insertsel_fail] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[joinneg] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg1] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg3] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg4] 
(batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_create_no_select_perm]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_rebuild_no_grant]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_authorization_rebuild_other]
 (batchId=95)

[jira] [Commented] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440062#comment-16440062
 ] 

Ashutosh Chauhan commented on HIVE-19001:
-

+1

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception

2018-04-16 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-19160:
---
Attachment: HIVE-19160.3.patch

> Insert data into decimal column fails with Null Pointer Exception
> -
>
> Key: HIVE-19160
> URL: https://issues.apache.org/jira/browse/HIVE-19160
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19160.1.patch, HIVE-19160.2.patch, 
> HIVE-19160.3.patch
>
>
> drop table if exists testDecimal;
> create table testDecimal
> (cId        TINYINT,
>  cBigInt    DECIMAL,
>  cInt       DECIMAL,
>  cSmallInt  DECIMAL,
>  cTinyint   DECIMAL);
> insert into testDecimal values
> (1,
>  1234567890123456789,
>  1234567890,
>  12345,
>  123);
> insert into testDecimal values
> (2,
>  1,
>  2,
>  3,
>  4);
> The second insert fails with null pointer exception.
> 2018-04-10T15:23:23,080 ERROR [5dba40ef-be49-4187-8a72-afbb46c41ecc main] 
> metastore.RetryingHMSHandler: java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318)
>   at 
> org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35)
>   at 
> org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1040)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:7166)
>   at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
>   at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
>   at com.sun.proxy.$Proxy40.set_aggr_stats_for(Unknown Source)
>   at 
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.setPartitionColumnStatistics(HiveMetaStoreClient.java:1870)
>   at 
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.setPartitionColumnStatistics(SessionHiveMetaStoreClient.java:395)
>   at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
>   at com.sun.proxy.$Proxy41.setPartitionColumnStatistics(Unknown Source)
>   at 
> org.apache.hadoop.hive.ql.metadata.Hive.setPartitionColumnStatistics(Hive.java:4171)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.persistColumnStats(ColStatsProcessor.java:179)
>   at 
> org.apache.hadoop.hive.ql.stats.ColStatsProcessor.process(ColStatsProcessor.java:83)
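From the trace, the NullPointerException surfaces while DecimalColumnStatsMerger compares decimal low/high values. Purely as an illustrative guess at the failure mode (not the actual fix), a null-safe comparison would avoid dereferencing a missing value when one side of the merge has no prior statistic:

{code:java}
import java.math.BigDecimal;

final class NullSafeDecimalMerge {
  /** Returns the smaller of two possibly-null values, treating null as "absent". */
  static BigDecimal mergeLow(BigDecimal oldLow, BigDecimal newLow) {
    if (oldLow == null) return newLow;
    if (newLow == null) return oldLow;
    return oldLow.compareTo(newLow) <= 0 ? oldLow : newLow;
  }
}
{code}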



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18915) Better client logging when a HoS session can't be opened

2018-04-16 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440044#comment-16440044
 ] 

Aihua Xu commented on HIVE-18915:
-

[~stakiar] Can you take a look at this simple change?

> Better client logging when a HoS session can't be opened
> 
>
> Key: HIVE-18915
> URL: https://issues.apache.org/jira/browse/HIVE-18915
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Affects Versions: 3.0.0
>Reporter: Sahil Takiar
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-18915.1.patch, HIVE-18915.2.patch, 
> HIVE-18915.3.patch
>
>
> Users just get a {{FAILED: Execution Error, return code 30041 from 
> org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create Spark client 
> for Spark session [id]}} when a HoS session can't be opened, would be better 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Status: Open  (was: Patch Available)

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Attachment: HIVE-19001.2.patch

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440039#comment-16440039
 ] 

Vineet Garg commented on HIVE-19001:


[~ashutoshc] No, this patch only provides support for ALTER TABLE ADD CONSTRAINT.

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint

2018-04-16 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19001:
---
Status: Patch Available  (was: Open)

> ALTER TABLE ADD CONSTRAINT support for CHECK constraint
> ---
>
> Key: HIVE-19001
> URL: https://issues.apache.org/jira/browse/HIVE-19001
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch
>
>
> ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table 
> level)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns

2018-04-16 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440033#comment-16440033
 ] 

Aihua Xu commented on HIVE-18986:
-

[~ychena] Can you take a look at the latest patch? I have tried a couple of 
times and the failed tests are not caused by this change. 

> Table rename will run java.lang.StackOverflowError in dataNucleus if the 
> table contains large number of columns
> ---
>
> Key: HIVE-18986
> URL: https://issues.apache.org/jira/browse/HIVE-18986
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, 
> HIVE-18986.3.patch
>
>
> If the table contains a lot of columns, e.g. 5k, a simple table rename would 
> fail with the following stack trace. The issue is that DataNucleus can't handle 
> a query with lots of colName='c1' && colName='c2' && ... terms.
>  
> 2018-03-13 17:19:52,770 INFO 
> org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: 
> ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: 
> db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 
> 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: 
> [pool-5-thread-200]: java.lang.StackOverflowError at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at 
> org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339)
>  
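One common workaround for this class of problem (illustrative only, not necessarily the committed fix) is to avoid building a single predicate with thousands of equality terms and instead process the column names in fixed-size batches, issuing one bounded query per batch:

{code:java}
import java.util.ArrayList;
import java.util.List;

final class ColumnBatchSketch {
  /** Splits items into consecutive batches of at most batchSize elements. */
  static <T> List<List<T>> partition(List<T> items, int batchSize) {
    List<List<T>> batches = new ArrayList<>();
    for (int i = 0; i < items.size(); i += batchSize) {
      batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
    }
    return batches;
  }

  public static void main(String[] args) {
    List<String> cols = new ArrayList<>();
    for (int i = 0; i < 5000; i++) cols.add("c" + i);
    // e.g. run one metastore query per batch of 500 column names
    System.out.println(partition(cols, 500).size() + " batches");
  }
}
{code}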



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19222) TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: GC overhead limit exceeded"

2018-04-16 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440027#comment-16440027
 ] 

Aihua Xu commented on HIVE-19222:
-

Created HIVE-19223 to track the test migration. That is a good idea.

> TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: 
> GC overhead limit exceeded"
> ---
>
> Key: HIVE-19222
> URL: https://issues.apache.org/jira/browse/HIVE-19222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19222.1.patch
>
>
> TestNegativeCliDriver tests have been failing with OOM recently. Not sure why. I 
> will try increasing the memory to test it out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19197) TestReplicationScenarios is flaky

2018-04-16 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16440024#comment-16440024
 ] 

Thejas M Nair commented on HIVE-19197:
--

[~maheshk114] Can you please review?


> TestReplicationScenarios is flaky
> -
>
> Key: HIVE-19197
> URL: https://issues.apache.org/jira/browse/HIVE-19197
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Test
>Reporter: Ashutosh Chauhan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19197.01.patch
>
>
> Fails once in a while.
> {code}
> java.lang.AssertionError: expected:<1> but was:<0>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyResults(TestReplicationScenarios.java:3629)
>   at 
> org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyRun(TestReplicationScenarios.java:3711)
>   at 
> org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyRun(TestReplicationScenarios.java:3706)
>   at 
> org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyAndReturnTblReplStatus(TestReplicationScenarios.java:3600)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19149) Vulnerability CVE-2018-1284, CVE-2018-1282, CVE-2018-1315

2018-04-16 Thread Thejas M Nair (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thejas M Nair resolved HIVE-19149.
--
Resolution: Invalid

> Vulnerability CVE-2018-1284, CVE-2018-1282, CVE-2018-1315
> -
>
> Key: HIVE-19149
> URL: https://issues.apache.org/jira/browse/HIVE-19149
> Project: Hive
>  Issue Type: Bug
>Reporter: Rohit Persai
>Priority: Major
>
> Need a fix for the below vulnerabilities in Hive:
> CVE-2018-1284,
> CVE-2018-1282,
> CVE-2018-1315



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19222) TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: GC overhead limit exceeded"

2018-04-16 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439998#comment-16439998
 ] 

Gopal V commented on HIVE-19222:


Is there a heap dump collected somewhere on the test machines?
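
For reference, an OOM heap dump can normally be captured by adding the standard HotSpot flags to the forked test JVM; how (or whether) ptest wires these into the surefire argLine is an assumption here:

{noformat}
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/hive-test-heapdumps
{noformat}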

> TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: 
> GC overhead limit exceeded"
> ---
>
> Key: HIVE-19222
> URL: https://issues.apache.org/jira/browse/HIVE-19222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19222.1.patch
>
>
> TestNegativeCliDriver tests have been failing with OOM recently. Not sure why. I 
> will try increasing the memory to test it out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19222) TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: GC overhead limit exceeded"

2018-04-16 Thread Aihua Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439996#comment-16439996
 ] 

Aihua Xu commented on HIVE-19222:
-

[~sershe] That's good to know. We can group the errors into fewer tests. HIVE-19000 
has a bit more insight.

> TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: 
> GC overhead limit exceeded"
> ---
>
> Key: HIVE-19222
> URL: https://issues.apache.org/jira/browse/HIVE-19222
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19222.1.patch
>
>
> TestNegativeCliDriver tests have been failing with OOM recently. Not sure why. I 
> will try increasing the memory to test it out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18690) Integrate with Spark OutputMetrics

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439995#comment-16439995
 ] 

Hive QA commented on HIVE-18690:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
1s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  1m  
3s{color} | {color:red} ql in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 3 new + 68 unchanged - 13 
fixed = 71 total (was 81) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} spark-client: The patch generated 5 new + 20 unchanged 
- 0 fixed = 25 total (was 20) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
16s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10256/dev-support/hive-personality.sh
 |
| git revision | master / 6afa544 |
| Default Java | 1.8.0_111 |
| mvninstall | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10256/yetus/patch-mvninstall-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10256/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10256/yetus/diff-checkstyle-spark-client.txt
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10256/yetus/patch-asflicense-problems.txt
 |
| modules | C: ql spark-client U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10256/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Integrate with Spark OutputMetrics
> --
>
> Key: HIVE-18690
> URL: https://issues.apache.org/jira/browse/HIVE-18690
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18690.1.patch
>
>
> Spark has an {{OutputMetrics}} it uses to expose records / bytes written. We 
> currently don't integrate with it and the Spark UI shows a blank value for 
> output records / bytes. We have our own custom accumulators instead (like 
> {{HIVE_RECORDS_OUT}}).
> Spark exposes the {{OutputMetrics}} object inside individual tasks via the 
> {{TaskContext.get()}} method. We can use this method to access the 
> {{OutputMetrics}} object and update it.
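
A minimal sketch of the idea, assuming the metrics setters are reachable from task-side code (their visibility differs across Spark versions, so a small shim in a Spark package may be needed). The class and method names below are illustrative, not part of the patch.

{code:java}
import org.apache.spark.TaskContext;

// Illustrative only: best-effort reporting of rows/bytes written from inside a task.
public final class SparkOutputMetricsReporter {
  private SparkOutputMetricsReporter() {}

  public static void report(long recordsWritten, long bytesWritten) {
    TaskContext ctx = TaskContext.get(); // null when not running inside a Spark task
    if (ctx == null) {
      return;
    }
    // Assumes the setters are accessible from here; in some Spark versions they
    // are package-private and have to be reached through a helper class.
    ctx.taskMetrics().outputMetrics().setRecordsWritten(recordsWritten);
    ctx.taskMetrics().outputMetrics().setBytesWritten(bytesWritten);
  }
}
{code}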



--
This message was sent by Atlassian JIRA

[jira] [Commented] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439988#comment-16439988
 ] 

Sergey Shelukhin commented on HIVE-17647:
-

Rebased the patch. [~ekoifman] can you review?

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.01.patch, HIVE-17647.02.patch, 
> HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> This should throw if there is no open transaction; it should never open one.
> In general the logic seems suspect. It looks like the intent is to move all 
> existing files into a delta_x_x/ directory when a plain table is converted to an MM table. 
> This seems like something that needs to be done under an Exclusive lock 
> to prevent concurrent Insert operations from writing data under the table/partition 
> root. But this is too late to acquire locks, which should be done from 
> Driver.acquireLocks() (or else a deadlock detector is needed, since acquiring them 
> here would break the all-or-nothing lock acquisition semantics currently required 
> without a deadlock detector).
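
Put differently, a sketch of the suggested behaviour (using the names from the snippet above, not the attached patch) would fail fast instead of opening a transaction here:

{code:java}
// Sketch only: never open a txn from DDLTask.
static long resolveMmWriteId(HiveTxnManager txnManager) throws HiveException {
  if (txnManager.isTxnOpen()) {
    return txnManager.getCurrentTxnId();
  }
  // Opening (and immediately committing) a txn here bypasses Driver.acquireLocks();
  // surface the problem instead of papering over it.
  throw new HiveException("No open transaction while generating MM conversion tasks");
}
{code}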



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17647) DDLTask.generateAddMmTasks(Table tbl) and other random code should not start transactions

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17647:

Attachment: HIVE-17647.02.patch

> DDLTask.generateAddMmTasks(Table tbl) and other random code should not start 
> transactions
> -
>
> Key: HIVE-17647
> URL: https://issues.apache.org/jira/browse/HIVE-17647
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17647.01.patch, HIVE-17647.02.patch, 
> HIVE-17647.patch
>
>
> This method (and other places) have 
> {noformat}
>   if (txnManager.isTxnOpen()) {
> mmWriteId = txnManager.getCurrentTxnId();
>   } else {
> mmWriteId = txnManager.openTxn(new Context(conf), conf.getUser());
> txnManager.commitTxn();
>   }
> {noformat}
> This should throw if there is no open transaction; it should never open one.
> In general the logic seems suspect. It looks like the intent is to move all 
> existing files into a delta_x_x/ directory when a plain table is converted to an MM table. 
> This seems like something that needs to be done under an Exclusive lock 
> to prevent concurrent Insert operations from writing data under the table/partition 
> root. But this is too late to acquire locks, which should be done from 
> Driver.acquireLocks() (or else a deadlock detector is needed, since acquiring them 
> here would break the all-or-nothing lock acquisition semantics currently required 
> without a deadlock detector).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19038) LLAP: Service loader throws "Provider not found" exception if hive-llap-server is in class path while loading tokens

2018-04-16 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HIVE-19038:
---
Fix Version/s: (was: 3.1.0)
   3.0.0

> LLAP: Service loader throws "Provider not found" exception if 
> hive-llap-server is in class path while loading tokens
> 
>
> Key: HIVE-19038
> URL: https://issues.apache.org/jira/browse/HIVE-19038
> Project: Hive
>  Issue Type: Bug
>Reporter: Arun Mahadevan
>Assignee: Arun Mahadevan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.0.0
>
>
> While testing Storm in secure mode, the hive-llap-server jar file was 
> included in the classpath, which resulted in the exception below while trying 
> to renew credentials by invoking 
> "org.apache.hadoop.security.token.Token.getRenewer"
>  
>  
> {noformat}
> java.util.ServiceConfigurationError: 
> org.apache.hadoop.security.token.TokenRenewer: Provider 
> org.apache.hadoop.hive.llap.security.LlapTokenIdentifier.Renewer not found at 
> java.util.ServiceLoader.fail(ServiceLoader.java:239) ~[?:1.8.0_161] at 
> java.util.ServiceLoader.access$300(ServiceLoader.java:185) ~[?:1.8.0_161] at 
> java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372) 
> ~[?:1.8.0_161] at 
> java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) 
> ~[?:1.8.0_161] at 
> java.util.ServiceLoader$1.next(ServiceLoader.java:480) ~[?:1.8.0_161] at 
> org.apache.hadoop.security.token.Token.getRenewer(Token.java:463) 
> ~[hadoop-common-3.0.0.3.0.0.0-1064.jar:?] at 
> org.apache.hadoop.security.token.Token.renew(Token.java:490) 
> ~[hadoop-common-3.0.0.3.0.0.0-1064.jar:?] at 
> org.apache.storm.hdfs.security.AutoHDFS.doRenew(AutoHDFS.java:159) 
> ~[storm-autocreds-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.common.AbstractAutoCreds.renew(AbstractAutoCreds.java:104) 
> ~[storm-autocreds-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_161] at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_161] at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) 
> ~[?:1.8.0_161] at 
> clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) 
> ~[clojure-1.7.0.jar:?] at 
> clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28) 
> ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.nimbus$renew_credentials$fn__9121$fn__9126.invoke(nimbus.clj:1450)
>  ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.daemon.nimbus$renew_credentials$fn__9121.invoke(nimbus.clj:1449)
>  ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.daemon.nimbus$renew_credentials.invoke(nimbus.clj:1439) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.daemon.nimbus$fn__9547$exec_fn__3301__auto9548$fn__9567.invoke(nimbus.clj:2521)
>  ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$schedule_recurring$this__1656.invoke(timer.clj:105) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639$fn__1640.invoke(timer.clj:50) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639.invoke(timer.clj:42) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] at 
> java.lang.Thread.run(Thread.java:748) [?:1.8.0_161] 2018-03-22 22:08:59.088 
> o.a.s.util timer [ERROR] Halting process: ("Error when processing an event") 
> java.lang.RuntimeException: ("Error when processing an event") at 
> org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> clojure.lang.RestFn.invoke(RestFn.java:423) ~[clojure-1.7.0.jar:?] at 
> org.apache.storm.daemon.nimbus$nimbus_data$fn__8334.invoke(nimbus.clj:221) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639$fn__1640.invoke(timer.clj:71) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> org.apache.storm.timer$mk_timer$fn__1639.invoke(timer.clj:42) 
> ~[storm-core-1.2.1.3.0.0.0-1064.jar:1.2.1.3.0.0.0-1064] at 
> clojure.lang.AFn.run(AFn.java:22) ~[clojure-1.7.0.jar:?] at 
> java.lang.Thread.run(Thread.java:748) [?:1.8.0_161]{noformat}
>  
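
For context, Token.getRenewer resolves renewers through the standard java.util.ServiceLoader scan of META-INF/services/org.apache.hadoop.security.token.TokenRenewer, which is where the error above originates. A minimal stand-alone probe of that mechanism (illustrative only; the token kind string is an assumption) looks like this:

{code:java}
import java.util.ServiceLoader;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.token.TokenRenewer;

// Mirrors the lookup Token.getRenewer() performs: iterating the ServiceLoader
// throws ServiceConfigurationError as soon as a provider class listed in
// META-INF/services/org.apache.hadoop.security.token.TokenRenewer is missing
// from the classpath, matching the stack trace above.
public class TokenRenewerProbe {
  public static void main(String[] args) {
    Text kind = new Text(args.length > 0 ? args[0] : "HIVE_LLAP_TOKEN");
    for (TokenRenewer renewer : ServiceLoader.load(TokenRenewer.class)) {
      if (renewer.handleKind(kind)) {
        System.out.println("Found renewer " + renewer.getClass().getName());
        return;
      }
    }
    System.out.println("No renewer handles token kind " + kind);
  }
}
{code}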



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19195) Fix flaky tests and cleanup testconfiguration to run llap specific tests in llap only.

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439978#comment-16439978
 ] 

Ashutosh Chauhan edited comment on HIVE-19195 at 4/16/18 8:21 PM:
--

TestMiniLlapCliDriver.testCliDriver[llap_smb] & 
TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] failed in the run for this 
patch. Doesn't seem like the flakiness is resolved yet.


was (Author: ashutoshc):
TestMiniLlapCliDriver.testCliDriver[llap_smb] failed in the run for this patch. 
Doesn't seem like the flakiness is resolved yet.

> Fix flaky tests and cleanup testconfiguration to run llap specific tests in 
> llap only.
> --
>
> Key: HIVE-19195
> URL: https://issues.apache.org/jira/browse/HIVE-19195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-19195.1.patch
>
>
> This test is certainly flaky. It seems to make some assumption about 
> available memory. Consider dropping it altogether.
>  Also move tests from the llaplocal.shared list to the llaplocal list so that they don't 
> run in MR (see the sketch below).
> Makes HIVE-17055 redundant.
> {code:java}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing auto_sortmerge_join_2.q 
> 1101,1103d1100
> < Hive Runtime Error: Map local work exhausted memory
> < FAILED: Execution Error, return code 3 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> < ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> {code}
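
Concretely, keeping a q-file out of MR means moving it from the shared list to the llap-local-only list in itests/src/test/resources/testconfiguration.properties. The property names below are recalled from that file and should be verified against it:

{noformat}
# before: the test runs under both MR and MiniLlapLocal
minillaplocal.shared.query.files=...,auto_sortmerge_join_2.q,...

# after: the test runs only under MiniLlapLocal
minillaplocal.query.files=...,auto_sortmerge_join_2.q,...
{noformat}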



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19195) Fix flaky tests and cleanup testconfiguration to run llap specific tests in llap only.

2018-04-16 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439978#comment-16439978
 ] 

Ashutosh Chauhan commented on HIVE-19195:
-

TestMiniLlapCliDriver.testCliDriver[llap_smb] failed in the run for this patch. 
Doesn't seem like the flakiness is resolved yet.

> Fix flaky tests and cleanup testconfiguration to run llap specific tests in 
> llap only.
> --
>
> Key: HIVE-19195
> URL: https://issues.apache.org/jira/browse/HIVE-19195
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-19195.1.patch
>
>
> This test is certainly flaky. It seems to make some assumption about 
> available memory. Consider dropping it altogether.
>  Also move tests from the llaplocal.shared list to the llaplocal list so that they don't 
> run in MR.
> Makes HIVE-17055 redundant.
> {code:java}
> Client Execution succeeded but contained differences (error code = 1) after 
> executing auto_sortmerge_join_2.q 
> 1101,1103d1100
> < Hive Runtime Error: Map local work exhausted memory
> < FAILED: Execution Error, return code 3 from 
> org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
> < ATTEMPT: Execute BackupTask: org.apache.hadoop.hive.ql.exec.mr.MapRedTask
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-12369) Native Vector GroupBy (Part 1)

2018-04-16 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439976#comment-16439976
 ] 

Hive QA commented on HIVE-12369:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12919129/HIVE-12369.095.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 83 failed/errored test(s), 13826 tests 
executed
*Failed tests:*
{noformat}
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=96)


[jira] [Commented] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-16 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16439977#comment-16439977
 ] 

Sergey Shelukhin commented on HIVE-17970:
-

Rebased the patch


> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new data 
> into the table. This logic makes sense for non-ACID tables, but for ACID/MM tables 
> it should work like the INSERT OVERWRITE statement and write the new data to base_n/. 
> That way the lock manager can either take an X lock for IOW and 
> thus block all readers, or let it run with SemiShared and let readers continue, 
> making the system more concurrent.
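
A minimal sketch of that direction (illustrative only, not the attached patch): an overwrite load on an MM table would target a fresh base_n directory derived from the write id instead of clearing the table directory first. Real code would use the AcidUtils naming helpers (zero-padded directory names); plain concatenation is used here for readability.

{code:java}
import org.apache.hadoop.fs.Path;

public final class MmOverwriteTarget {
  private MmOverwriteTarget() {}

  // Illustrative: destination for LOAD DATA ... OVERWRITE on an MM table.
  // Readers keep seeing the older base/delta directories until the new base_n
  // becomes visible, so the lock manager does not have to block all readers.
  static Path overwriteDir(Path tableOrPartitionRoot, long writeId) {
    return new Path(tableOrPartitionRoot, "base_" + writeId);
  }
}
{code}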



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17970) MM LOAD DATA with OVERWRITE doesn't use base_n directory concept

2018-04-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-17970:

Attachment: HIVE-17970.02.patch

> MM LOAD DATA with OVERWRITE doesn't use base_n directory concept
> 
>
> Key: HIVE-17970
> URL: https://issues.apache.org/jira/browse/HIVE-17970
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Sergey Shelukhin
>Priority: Major
>  Labels: mm-gap-2
> Attachments: HIVE-17970.01.patch, HIVE-17970.02.patch, 
> HIVE-17970.patch
>
>
> Judging by 
> {code:java}
> Hive.loadTable(Path loadPath, String tableName, LoadFileType loadFileType, 
> boolean isSrcLocal,
>   boolean isSkewedStoreAsSubdir, boolean isAcid, boolean 
> hasFollowingStatsTask,
>   Long txnId, int stmtId, boolean isMmTable)
> {code}
> LOAD DATA with OVERWRITE will delete all existing data and then write new data 
> into the table. This logic makes sense for non-ACID tables, but for ACID/MM tables 
> it should work like the INSERT OVERWRITE statement and write the new data to base_n/. 
> That way the lock manager can either take an X lock for IOW and 
> thus block all readers, or let it run with SemiShared and let readers continue, 
> making the system more concurrent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

