[jira] [Commented] (HIVE-16346) inheritPerms should be conditional based on the target filesystem

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956343#comment-15956343
 ] 

Hive QA commented on HIVE-16346:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862004/HIVE-16346.4.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10577 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4557/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4557/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4557/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12862004 - PreCommit-HIVE-Build

> inheritPerms should be conditional based on the target filesystem
> -
>
> Key: HIVE-16346
> URL: https://issues.apache.org/jira/browse/HIVE-16346
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16346.1.patch, HIVE-16346.2.patch, 
> HIVE-16346.3.patch, HIVE-16346.4.patch
>
>
> Right now, a lot of the logic in {{Hive.java}} attempts to set permissions of 
> different files that have been moved / copied. This is only triggered if 
> {{hive.warehouse.subdir.inherit.perms}} is set to true.
> However, on blobstores such as S3, there is no concept of file permissions, so 
> these calls are unnecessary and can cause a performance impact.
> One solution would be to set {{hive.warehouse.subdir.inherit.perms}} to 
> false, but this would be a global change that affects an entire HS2 instance. 
> So HDFS tables will no longer have permissions inheritance.
> A better solution would be to make the inheritance of permissions conditional 
> on the target filesystem.
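The conditional check described above could look roughly like the following sketch. The blobstore scheme list and the method name are illustrative assumptions, not taken from the attached patch:

```java
import java.net.URI;
import java.util.Set;

public class InheritPermsCheck {
    // Illustrative list of blobstore schemes with no real permission model;
    // the actual set used by the patch may differ.
    private static final Set<String> BLOBSTORE_SCHEMES = Set.of("s3", "s3a", "s3n", "swift");

    // Inherit permissions only when the config flag is on AND the target
    // filesystem actually supports file permissions.
    static boolean shouldInheritPerms(boolean inheritPermsConf, URI targetFs) {
        return inheritPermsConf && !BLOBSTORE_SCHEMES.contains(targetFs.getScheme());
    }

    public static void main(String[] args) {
        System.out.println(shouldInheritPerms(true, URI.create("hdfs://nn:8020/warehouse"))); // true
        System.out.println(shouldInheritPerms(true, URI.create("s3a://bucket/warehouse")));   // false
    }
}
```

This keeps {{hive.warehouse.subdir.inherit.perms}} meaningful for HDFS tables while skipping the permission calls on filesystems where they are no-ops.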



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16311) Improve the performance for FastHiveDecimalImpl.fastDivide

2017-04-04 Thread Colin Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956322#comment-15956322
 ] 

Colin Ma commented on HIVE-16311:
-

[~xuefuz], for cases like 1234567.8901234560/9, here is the reason for 
these diffs: when creating a HiveDecimal via 
HiveDecimal.create("1234567.8901234560"), the trailing zero is dropped by 
the following code:
https://github.com/apache/hive/blob/master/storage-api/src/java/org/apache/hadoop/hive/common/type/FastHiveDecimalImpl.java#L482
So 1234567.890123456 (*with scale 9*) is used for the division, not 
1234567.8901234560 (*with scale 10*).
Without this patch, the scale of the result is always *HiveDecimal.MAX_SCALE*; 
after resetting the scale for output, the result is *137174.210013717333*.
With this patch, the scale of the result is calculated as *11* by Max(6, 9+1+1), 
and the result is *137174.21001371733*; after resetting the scale for output, 
the result is *137174.210013717330*.
I think it's ok to keep the trailing zero; I'll update the patch and check 
whether all tests pass.
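The scale arithmetic above can be reproduced with plain BigDecimal. This is a sketch of the Max(6, scale+1+1) formula from the comment, not the actual FastHiveDecimalImpl code:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class FastDivideScale {
    // Result scale per the formula in the comment: Max(6, dividendScale + 1 + 1).
    static BigDecimal divide(String dividend, String divisor) {
        BigDecimal a = new BigDecimal(dividend);
        BigDecimal b = new BigDecimal(divisor);
        int scale = Math.max(6, a.scale() + 1 + 1);
        return a.divide(b, scale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        // HiveDecimal.create drops the trailing zero, so the dividend has
        // scale 9 and the result scale is Max(6, 9+1+1) = 11.
        System.out.println(divide("1234567.890123456", "9")); // 137174.21001371733
    }
}
```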

> Improve the performance for FastHiveDecimalImpl.fastDivide
> --
>
> Key: HIVE-16311
> URL: https://issues.apache.org/jira/browse/HIVE-16311
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Fix For: 3.0.0
>
> Attachments: HIVE-16311.001.patch, HIVE-16311.002.patch, 
> HIVE-16311.003.patch, HIVE-16311.004.patch, HIVE-16311.withTrailingZero.patch
>
>
> FastHiveDecimalImpl.fastDivide has poor performance when evaluating 
> expressions such as 12345.67/123.45
> There are 2 points that can be improved:
> 1. Don't always use HiveDecimal.MAX_SCALE as the scale when doing the 
> BigDecimal.divide.
> 2. Get the precision for a BigInteger in a fast way if possible.





[jira] [Updated] (HIVE-16380) removing global test dependency of jsonassert

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-16380:
---
Attachment: HIVE-16380.2.patch

adding back the required mockito-all global dependency
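For reference, re-adding mockito-all as a test-scoped dependency in the root pom.xml would look roughly like this (the version property name is an assumption, not taken from the patch):

```xml
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <version>${mockito-all.version}</version>
  <scope>test</scope>
</dependency>
```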

> removing global test dependency of jsonassert
> -
>
> Key: HIVE-16380
> URL: https://issues.apache.org/jira/browse/HIVE-16380
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-16380.1.patch, HIVE-16380.2.patch
>
>
> as part of the commit done for HIVE-16219, there seem to be additional changes 
> in the root-level pom.xml; they should not be required. 





[jira] [Updated] (HIVE-16311) Improve the performance for FastHiveDecimalImpl.fastDivide

2017-04-04 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-16311:

Attachment: HIVE-16311.withTrailingZero.patch

> Improve the performance for FastHiveDecimalImpl.fastDivide
> --
>
> Key: HIVE-16311
> URL: https://issues.apache.org/jira/browse/HIVE-16311
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Fix For: 3.0.0
>
> Attachments: HIVE-16311.001.patch, HIVE-16311.002.patch, 
> HIVE-16311.003.patch, HIVE-16311.004.patch, HIVE-16311.withTrailingZero.patch
>
>
> FastHiveDecimalImpl.fastDivide has poor performance when evaluating 
> expressions such as 12345.67/123.45
> There are 2 points that can be improved:
> 1. Don't always use HiveDecimal.MAX_SCALE as the scale when doing the 
> BigDecimal.divide.
> 2. Get the precision for a BigInteger in a fast way if possible.





[jira] [Updated] (HIVE-16346) inheritPerms should be conditional based on the target filesystem

2017-04-04 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-16346:

Attachment: HIVE-16346.4.patch

> inheritPerms should be conditional based on the target filesystem
> -
>
> Key: HIVE-16346
> URL: https://issues.apache.org/jira/browse/HIVE-16346
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16346.1.patch, HIVE-16346.2.patch, 
> HIVE-16346.3.patch, HIVE-16346.4.patch
>
>
> Right now, a lot of the logic in {{Hive.java}} attempts to set permissions of 
> different files that have been moved / copied. This is only triggered if 
> {{hive.warehouse.subdir.inherit.perms}} is set to true.
> However, on blobstores such as S3, there is no concept of file permissions, so 
> these calls are unnecessary and can cause a performance impact.
> One solution would be to set {{hive.warehouse.subdir.inherit.perms}} to 
> false, but this would be a global change that affects an entire HS2 instance. 
> So HDFS tables will no longer have permissions inheritance.
> A better solution would be to make the inheritance of permissions conditional 
> on the target filesystem.





[jira] [Commented] (HIVE-16380) removing global test dependency of jsonassert

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956299#comment-15956299
 ] 

Hive QA commented on HIVE-16380:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12862000/HIVE-16380.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4556/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4556/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4556/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[ERROR] symbol:   method any(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[62,42]
 cannot find symbol
[ERROR] symbol:   method any(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[62,61]
 cannot find symbol
[ERROR] symbol:   method any(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[62,5]
 cannot find symbol
[ERROR] symbol:   method verify(org.apache.hadoop.fs.FileSystem)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[74,55]
 cannot find symbol
[ERROR] symbol:   method 
mock(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[75,35]
 cannot find symbol
[ERROR] symbol:   method mock(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[76,31]
 cannot find symbol
[ERROR] symbol:   method 
mock(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[77,25]
 cannot find symbol
[ERROR] symbol:   method mock(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[79,5]
 cannot find symbol
[ERROR] symbol:   method when(org.apache.hadoop.fs.permission.FsPermission)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[80,5]
 cannot find symbol
[ERROR] symbol:   method when(java.lang.String)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[81,5]
 cannot find symbol
[ERROR] symbol:   method when(org.apache.hadoop.fs.FileStatus)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[82,5]
 cannot find symbol
[ERROR] symbol:   method 
when(java.util.List)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[83,5]
 cannot find symbol
[ERROR] symbol:   method when(org.apache.hadoop.fs.permission.AclStatus)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[84,57]
 cannot find symbol
[ERROR] symbol:   method any(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[84,74]
 cannot find symbol
[ERROR] symbol:   method any(java.lang.Class)
[ERROR] location: class org.apache.hadoop.hive.io.TestHdfsUtils
[ERROR] 
/data/hiveptest/working/apache-github-source-source/shims/common/src/main/test/org/apache/hadoop/hive/io/TestHdfsUtils.java:[84,5]
 cannot find symbol
[ERROR] symbol:   method doThrow(java.lang.Class)
[ERROR] location: class 

[jira] [Commented] (HIVE-16267) Enable bootstrap function metadata to be loaded in repl load

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956297#comment-15956297
 ] 

Hive QA commented on HIVE-16267:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861999/HIVE-16267.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4555/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4555/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4555/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-04-05 04:48:34.152
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-4555/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-04-05 04:48:34.155
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 4e60ea3 HIVE-16297: Improving hive logging configuration 
variables (Vihang Karajgaonkar, reviewed by Peter Vary & Aihua Xu)
+ git clean -f -d
Removing ql/src/java/org/apache/hadoop/hive/ql/QueryLifeTimeHookRunner.java
Removing ql/src/java/org/apache/hadoop/hive/ql/hooks/HooksLoader.java
Removing 
ql/src/java/org/apache/hadoop/hive/ql/hooks/QueryLifeTimeHookWithParseHooks.java
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4e60ea3 HIVE-16297: Improving hive logging configuration 
variables (Vihang Karajgaonkar, reviewed by Peter Vary & Aihua Xu)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-04-05 04:48:38.164
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestReplicationScenarios.java:34
error: 
itests/hive-unit/src/test/java/org/apache/hadoop/hive/ql/TestReplicationScenarios.java:
 patch does not apply
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java:17
error: 
ql/src/java/org/apache/hadoop/hive/ql/parse/ReplicationSemanticAnalyzer.java: 
patch does not apply
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861999 - PreCommit-HIVE-Build

> Enable bootstrap function metadata to be loaded in repl load
> 
>
> Key: HIVE-16267
> URL: https://issues.apache.org/jira/browse/HIVE-16267
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
> Attachments: HIVE-16267.1.patch
>
>






[jira] [Commented] (HIVE-16363) QueryLifeTimeHooks should catch parse exceptions

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956296#comment-15956296
 ] 

Hive QA commented on HIVE-16363:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861990/HIVE-16363.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10579 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4554/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4554/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4554/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861990 - PreCommit-HIVE-Build

> QueryLifeTimeHooks should catch parse exceptions
> 
>
> Key: HIVE-16363
> URL: https://issues.apache.org/jira/browse/HIVE-16363
> Project: Hive
>  Issue Type: Bug
>  Components: Hooks
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16363.1.patch, HIVE-16363.2.patch, 
> HIVE-16363.3.patch
>
>
> The {{QueryLifeTimeHook}} objects do not catch exceptions during query 
> parsing, only during query compilation. New methods should be added to hook 
> into pre- and post-parsing of the query.
> This should be done in a backwards compatible way so that current 
> implementations of this hook do not break.
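An extension along the lines described could look like the following sketch; the interface and method names are assumptions for illustration, not necessarily those used in the patch:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of pre/post-parse callbacks alongside the existing compile hooks.
interface QueryParseLifeTimeHook {
    void beforeParse(String query);
    void afterParse(String query, boolean parseError);
}

// Trivial implementation that records the callbacks it receives.
public class RecordingParseHook implements QueryParseLifeTimeHook {
    final List<String> events = new ArrayList<>();

    public void beforeParse(String query) {
        events.add("beforeParse");
    }

    public void afterParse(String query, boolean parseError) {
        events.add(parseError ? "afterParse:error" : "afterParse:ok");
    }

    public static void main(String[] args) {
        RecordingParseHook hook = new RecordingParseHook();
        hook.beforeParse("SELECT 1");
        hook.afterParse("SELECT 1", false);
        System.out.println(hook.events); // [beforeParse, afterParse:ok]
    }
}
```

Because the new methods live in a separate interface (or get default no-op bodies), existing hook implementations keep compiling unchanged.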





[jira] [Updated] (HIVE-16380) removing global test dependency of jsonassert

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-16380:
---
Attachment: HIVE-16380.1.patch

> removing global test dependency of jsonassert
> -
>
> Key: HIVE-16380
> URL: https://issues.apache.org/jira/browse/HIVE-16380
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-16380.1.patch
>
>
> as part of the commit done for HIVE-16219, there seem to be additional changes 
> in the root-level pom.xml; they should not be required. 





[jira] [Updated] (HIVE-16380) removing global test dependency of jsonassert

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-16380:
---
Status: Patch Available  (was: Open)

> removing global test dependency of jsonassert
> -
>
> Key: HIVE-16380
> URL: https://issues.apache.org/jira/browse/HIVE-16380
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-16380.1.patch
>
>
> as part of the commit done for HIVE-16219, there seem to be additional changes 
> in the root-level pom.xml; they should not be required. 





[jira] [Updated] (HIVE-16380) removing global test dependency of jsonassert

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-16380:
---
Summary: removing global test dependency of jsonassert  (was: removing 
global test dependency of json assert)

> removing global test dependency of jsonassert
> -
>
> Key: HIVE-16380
> URL: https://issues.apache.org/jira/browse/HIVE-16380
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Fix For: 3.0.0
>
>
> as part of the commit done for HIVE-16219, there seem to be additional changes 
> in the root-level pom.xml; they should not be required. 





[jira] [Assigned] (HIVE-16380) removing global test dependency of json assert

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek reassigned HIVE-16380:
--


> removing global test dependency of json assert
> --
>
> Key: HIVE-16380
> URL: https://issues.apache.org/jira/browse/HIVE-16380
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: anishek
>Assignee: anishek
>Priority: Minor
> Fix For: 3.0.0
>
>
> as part of the commit done for HIVE-16219, there seem to be additional changes 
> in the root-level pom.xml; they should not be required. 





[jira] [Updated] (HIVE-16267) Enable bootstrap function metadata to be loaded in repl load

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-16267:
---
Attachment: HIVE-16267.1.patch

Submitting again; hopefully the build system will pick it up.

> Enable bootstrap function metadata to be loaded in repl load
> 
>
> Key: HIVE-16267
> URL: https://issues.apache.org/jira/browse/HIVE-16267
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
> Attachments: HIVE-16267.1.patch
>
>






[jira] [Updated] (HIVE-16267) Enable bootstrap function metadata to be loaded in repl load

2017-04-04 Thread anishek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

anishek updated HIVE-16267:
---
Attachment: (was: HIVE-16267.1.patch)

> Enable bootstrap function metadata to be loaded in repl load
> 
>
> Key: HIVE-16267
> URL: https://issues.apache.org/jira/browse/HIVE-16267
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
> Attachments: HIVE-16267.1.patch
>
>






[jira] [Commented] (HIVE-12636) Ensure that all queries (with DbTxnManager) run in a transaction

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956251#comment-15956251
 ] 

Hive QA commented on HIVE-12636:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861986/HIVE-12636.04.patch

{color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 10572 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dbtxnmgr_showlocks] 
(batchId=73)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=59)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into1] 
(batchId=87)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into2] 
(batchId=88)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into3] 
(batchId=87)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into4] 
(batchId=87)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[lockneg_try_drop_locked_db]
 (batchId=87)
org.apache.hadoop.hive.ql.TestTxnCommands.testDelete (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testErrors (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testExplicitRollback (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testImplicitRollback (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testMultipleDelete (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testMultipleInserts (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testReadMyOwnInsert (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testSimpleAcidInsert (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testTimeOutReaper (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testUpdateDeleteOfInserts 
(batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands.testUpdateOfInserts (batchId=277)
org.apache.hadoop.hive.ql.TestTxnCommands2.testSimpleRead (batchId=265)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdate.testSimpleRead 
(batchId=275)
org.apache.hadoop.hive.ql.TestTxnCommands2WithSplitUpdateAndVectorization.testSimpleRead
 (batchId=272)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMerge3Way01 
(batchId=276)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMerge3Way02 
(batchId=276)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMergePartitioned02 
(batchId=276)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testMergeUnpartitioned02 
(batchId=276)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking11 
(batchId=276)
org.apache.hadoop.hive.ql.lockmgr.TestDbTxnManager2.testWriteSetTracking5 
(batchId=276)
org.apache.hive.hcatalog.mapreduce.TestHCatMultiOutputFormat.testOutputFormat 
(batchId=184)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4553/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4553/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4553/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861986 - PreCommit-HIVE-Build

> Ensure that all queries (with DbTxnManager) run in a transaction
> 
>
> Key: HIVE-12636
> URL: https://issues.apache.org/jira/browse/HIVE-12636
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-12636.01.patch, HIVE-12636.02.patch, 
> HIVE-12636.03.patch, HIVE-12636.04.patch
>
>
> Assuming Hive is using DbTxnManager.
> Currently (as of this writing, only auto-commit mode is supported), only 
> queries that write to an ACID table start a transaction.
> Read-only queries don't open a txn but still acquire locks.
> This makes the internal structures confusing/odd.
> There are constantly 2 code paths to deal with, which is inconvenient and 
> error prone.
> Also, a txn id is a convenient "handle" for all locks/resources within a txn.
> Doing this would mean the 

[jira] [Commented] (HIVE-15931) JDBC: Improve logging when using ZooKeeper and anonymize passwords before logging

2017-04-04 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15931?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956231#comment-15956231
 ] 

Vaibhav Gumashta commented on HIVE-15931:
-

[~pvary] Sorry, was occupied with other stuff. Will post a follow up tomorrow 
based on your latest rb feedback. Thanks for the reviews so far.

> JDBC: Improve logging when using ZooKeeper and anonymize passwords before 
> logging
> -
>
> Key: HIVE-15931
> URL: https://issues.apache.org/jira/browse/HIVE-15931
> Project: Hive
>  Issue Type: Bug
>  Components: JDBC
>Affects Versions: 2.2.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-15931.1.patch, HIVE-15931.2.patch, 
> HIVE-15931.3.patch, HIVE-15931.4.patch, HIVE-15931.5.patch
>
>






[jira] [Commented] (HIVE-15986) Support "is [not] distinct from"

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956213#comment-15956213
 ] 

Hive QA commented on HIVE-15986:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861975/HIVE-15986.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10578 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_functions] 
(batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[udf_equal] (batchId=52)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4552/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4552/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4552/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861975 - PreCommit-HIVE-Build

> Support "is [not] distinct from"
> 
>
> Key: HIVE-15986
> URL: https://issues.apache.org/jira/browse/HIVE-15986
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
> Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch
>
>
> Support standard "is [not] distinct from" syntax. For example this gives a 
> standard way to do a comparison to null safe join: select * from t1 join t2 
> on t1.x is not distinct from t2.y. SQL standard reference Section 8.15





[jira] [Commented] (HIVE-16285) Servlet for dynamically configuring log levels

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956173#comment-15956173
 ] 

Hive QA commented on HIVE-16285:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861969/HIVE-16285.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 10577 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4551/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4551/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4551/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861969 - PreCommit-HIVE-Build

> Servlet for dynamically configuring log levels
> --
>
> Key: HIVE-16285
> URL: https://issues.apache.org/jira/browse/HIVE-16285
> Project: Hive
>  Issue Type: Improvement
>  Components: Logging
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, 
> HIVE-16285.3.patch
>
>
> Many long running services like HS2, LLAP etc. will benefit from having an 
> endpoint to dynamically change log levels for various loggers. This will help 
> greatly with debuggability without requiring a restart of the service. 





[jira] [Updated] (HIVE-16363) QueryLifeTimeHooks should catch parse exceptions

2017-04-04 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-16363:

Attachment: HIVE-16363.3.patch

Thanks for the review [~vihangk1]. Addressed your comments and updated the RB. 
[~spena], can you take a look too?

Latest patch adds unit tests.

> QueryLifeTimeHooks should catch parse exceptions
> 
>
> Key: HIVE-16363
> URL: https://issues.apache.org/jira/browse/HIVE-16363
> Project: Hive
>  Issue Type: Bug
>  Components: Hooks
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16363.1.patch, HIVE-16363.2.patch, 
> HIVE-16363.3.patch
>
>
> The {{QueryLifeTimeHook}} objects do not catch exceptions during query 
> parsing, only query compilation. New methods should be added to hook into pre 
> and post parsing of the query.
> This should be done in a backward-compatible way so that current 
> implementations of this hook do not break.
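The backward-compatible design described above can be sketched as follows: the parse callbacks live in a new, optional sub-interface, so existing hook implementations keep compiling untouched. The names and signatures below are illustrative assumptions, not necessarily the exact API that HIVE-16363 committed.

```java
import java.util.List;

// Sketch: parse callbacks go in a NEW optional sub-interface, so existing
// QueryLifeTimeHook implementations are unaffected. Names and signatures
// here are assumptions for illustration only.
public class ParseHookSketch {
    public interface QueryLifeTimeHook {
        void beforeCompile(String command);
        void afterCompile(String command, boolean hasError);
    }

    // Hooks opt in to parse events by implementing this sub-interface.
    public interface QueryLifeTimeHookWithParseHooks extends QueryLifeTimeHook {
        void beforeParse(String command);
        // hasError lets hooks observe parse failures, the gap described above
        void afterParse(String command, boolean hasError);
    }

    // Driver-side dispatch: only parse-aware hooks receive the new event,
    // so legacy hooks in the same list keep working unchanged.
    public static void fireBeforeParse(List<QueryLifeTimeHook> hooks, String cmd) {
        for (QueryLifeTimeHook h : hooks) {
            if (h instanceof QueryLifeTimeHookWithParseHooks) {
                ((QueryLifeTimeHookWithParseHooks) h).beforeParse(cmd);
            }
        }
    }
}
```

The `instanceof` dispatch is what keeps the change backward compatible: old hooks simply never see the new events.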





[jira] [Updated] (HIVE-12636) Ensure that all queries (with DbTxnManager) run in a transaction

2017-04-04 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-12636:
--
Attachment: HIVE-12636.04.patch

> Ensure that all queries (with DbTxnManager) run in a transaction
> 
>
> Key: HIVE-12636
> URL: https://issues.apache.org/jira/browse/HIVE-12636
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-12636.01.patch, HIVE-12636.02.patch, 
> HIVE-12636.03.patch, HIVE-12636.04.patch
>
>
> Assuming Hive is using DbTxnManager
> Currently (as of this writing only auto commit mode is supported), only 
> queries that write to an Acid table start a transaction.
> Read-only queries don't open a txn but still acquire locks.
> This makes internal structures confusing/odd.
> There are constantly 2 code paths to deal with, which is inconvenient and error 
> prone.
> Also, a txn id is a convenient "handle" for all locks/resources within a txn.
> Doing this would mean the client no longer needs to track locks that it 
> acquired.  This enables further improvements to the metastore side of Acid.
> # add a metastore call to openTxn() and acquireLocks() in a single call.  This 
> is to make sure perf doesn't degrade for read-only queries.  (Would also be 
> useful for auto-commit write queries)
> # Should RO queries generate txn ids from the same sequence?  (they could for 
> example use negative values of a different sequence).  Txnid is part of the 
> delta/base file name.  Currently it's 7 digits.  If we use the same sequence, 
> we'll exceed 7 digits faster. (possible upgrade issue).  On the other hand 
> there is value in being able to pick txn id and commit timestamp out of the 
> same logical sequence.





[jira] [Commented] (HIVE-16345) BeeLineDriver should be able to run qtest files which are using default database tables

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956135#comment-15956135
 ] 

Hive QA commented on HIVE-16345:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861953/HIVE-16345.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 10586 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4550/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4550/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4550/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861953 - PreCommit-HIVE-Build

> BeeLineDriver should be able to run qtest files which are using default 
> database tables
> ---
>
> Key: HIVE-16345
> URL: https://issues.apache.org/jira/browse/HIVE-16345
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-16345.patch
>
>
> It would be good to be able to run the default clientpositive tests. 
> Currently we cannot do that, since we start with a specific database. We 
> should filter the query input and replace the table references.





[jira] [Commented] (HIVE-16379) Can not compute column stats when partition column is decimal

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956127#comment-15956127
 ] 

Pengcheng Xiong commented on HIVE-16379:


IMHO, we should remove hive.typecheck.on.insert

> Can not compute column stats when partition column is decimal 
> --
>
> Key: HIVE-16379
> URL: https://issues.apache.org/jira/browse/HIVE-16379
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> to repro, run 
> {code}
> set hive.compute.query.using.stats=false;
> set hive.stats.column.autogather=false;
> drop table if exists partcoltypeothers;
> create table partcoltypeothers (key int, value string) partitioned by 
> (decpart decimal(6,2), datepart date);
> set hive.typecheck.on.insert=false;
> insert into partcoltypeothers partition (decpart = 1000.01BD, datepart = date 
> '2015-4-13') select key, value from src limit 10;
> show partitions partcoltypeothers;
> analyze table partcoltypeothers partition (decpart = 1000.01BD, datepart = 
> date '2015-4-13') compute statistics for columns;
> {code}





[jira] [Commented] (HIVE-10307) Support to use number literals in partition column

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956125#comment-15956125
 ] 

Pengcheng Xiong commented on HIVE-10307:


see HIVE-16379

> Support to use number literals in partition column
> --
>
> Key: HIVE-10307
> URL: https://issues.apache.org/jira/browse/HIVE-10307
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.0.0
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 1.2.0
>
> Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
> HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
> HIVE-10307.6.patch, HIVE-10307.patch
>
>
> Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
> literals with postfix like Y, S, L, or BD appended to the number. These 
> literals work in most Hive queries, but do not when they are used as 
> partition column value. For a partitioned table like:
> create table partcoltypenum (key int, value string) partitioned by (tint 
> tinyint, sint smallint, bint bigint);
> insert into partcoltypenum partition (tint=100Y, sint=1S, 
> bint=1000L) select key, value from src limit 30;
> Queries like select, describe and drop partition do not work. For example:
> select * from partcoltypenum where tint=100Y and sint=1S and 
> bint=1000L;
> does not return any rows.





[jira] [Assigned] (HIVE-16379) Can not compute column stats when partition column is decimal

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong reassigned HIVE-16379:
--


> Can not compute column stats when partition column is decimal 
> --
>
> Key: HIVE-16379
> URL: https://issues.apache.org/jira/browse/HIVE-16379
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> to repro, run 
> {code}
> set hive.compute.query.using.stats=false;
> set hive.stats.column.autogather=false;
> drop table if exists partcoltypeothers;
> create table partcoltypeothers (key int, value string) partitioned by 
> (decpart decimal(6,2), datepart date);
> set hive.typecheck.on.insert=false;
> insert into partcoltypeothers partition (decpart = 1000.01BD, datepart = date 
> '2015-4-13') select key, value from src limit 10;
> show partitions partcoltypeothers;
> analyze table partcoltypeothers partition (decpart = 1000.01BD, datepart = 
> date '2015-4-13') compute statistics for columns;
> {code}





[jira] [Commented] (HIVE-10307) Support to use number literals in partition column

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-10307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956122#comment-15956122
 ] 

Pengcheng Xiong commented on HIVE-10307:


[~ctang.ma] and [~jxiang], could you give me a case where it is required to set 
this configuration to false? IMHO, we should remove this configuration because 
it should always be set to true.

> Support to use number literals in partition column
> --
>
> Key: HIVE-10307
> URL: https://issues.apache.org/jira/browse/HIVE-10307
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Processor
>Affects Versions: 1.0.0
>Reporter: Chaoyu Tang
>Assignee: Chaoyu Tang
> Fix For: 1.2.0
>
> Attachments: HIVE-10307.1.patch, HIVE-10307.2.patch, 
> HIVE-10307.3.patch, HIVE-10307.4.patch, HIVE-10307.5.patch, 
> HIVE-10307.6.patch, HIVE-10307.patch
>
>
> Data types like TinyInt, SmallInt, BigInt or Decimal can be expressed as 
> literals with postfix like Y, S, L, or BD appended to the number. These 
> literals work in most Hive queries, but do not when they are used as 
> partition column value. For a partitioned table like:
> create table partcoltypenum (key int, value string) partitioned by (tint 
> tinyint, sint smallint, bint bigint);
> insert into partcoltypenum partition (tint=100Y, sint=1S, 
> bint=1000L) select key, value from src limit 30;
> Queries like select, describe and drop partition do not work. For example:
> select * from partcoltypenum where tint=100Y and sint=1S and 
> bint=1000L;
> does not return any rows.





[jira] [Commented] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956112#comment-15956112
 ] 

Pengcheng Xiong commented on HIVE-16372:


OK. I will resubmit the patch.

> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16372.01.patch
>
>






[jira] [Commented] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956110#comment-15956110
 ] 

Thejas M Nair commented on HIVE-16372:
--

In AlterTableDesc.java:
Using a Guava immutable list would be better, i.e.:
 public static final List<AlterTableTypes> nonNativeTableAllowedTypes = 
ImmutableList.of(ADDPROPS, DROPPROPS); 

In ErrorMsg.java:
 Arrays.toString(AlterTableTypes.nonNativeTableAllowedTypes.toArray()) can be 
replaced with simply:
AlterTableTypes.nonNativeTableAllowedTypes
(it's the same format)
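The review suggestion above can be sketched with the JDK's own immutable list factory (Guava's ImmutableList.of gives the same guarantee); the enum here is a simplified stand-in for AlterTableDesc.AlterTableTypes, not Hive's actual class:

```java
import java.util.List;

// Simplified stand-in for AlterTableDesc.AlterTableTypes; only the two
// values relevant to non-native tables are shown.
public class AlterTypesSketch {
    public enum AlterTableTypes { ADDPROPS, DROPPROPS }

    // Immutable, so no caller can mutate the shared allow-list.
    // (The review suggests Guava's ImmutableList.of; java.util.List.of
    // provides the same immutability guarantee in the JDK.)
    public static final List<AlterTableTypes> NON_NATIVE_ALLOWED =
        List.of(AlterTableTypes.ADDPROPS, AlterTableTypes.DROPPROPS);

    // List's own toString() prints "[ADDPROPS, DROPPROPS]" -- the same
    // format Arrays.toString(list.toArray()) produces, which is why the
    // ErrorMsg.java call can be replaced with the list itself.
    public static String describe() {
        return NON_NATIVE_ALLOWED.toString();
    }
}
```

This also illustrates the "(it's the same format)" point: the list's toString matches what Arrays.toString produced before.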


> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16372.01.patch
>
>






[jira] [Commented] (HIVE-16296) use LLAP executor count to configure reducer auto-parallelism

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956084#comment-15956084
 ] 

Hive QA commented on HIVE-16296:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861950/HIVE-16296.08.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10577 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_binary_join_groupby]
 (batchId=76)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=220)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4549/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4549/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4549/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 5 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861950 - PreCommit-HIVE-Build

> use LLAP executor count to configure reducer auto-parallelism
> -
>
> Key: HIVE-16296
> URL: https://issues.apache.org/jira/browse/HIVE-16296
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16296.01.patch, HIVE-16296.03.patch, 
> HIVE-16296.04.patch, HIVE-16296.05.patch, HIVE-16296.06.patch, 
> HIVE-16296.07.patch, HIVE-16296.08.patch, HIVE-16296.2.patch, HIVE-16296.patch
>
>






[jira] [Updated] (HIVE-16335) Beeline user HS2 connection file should use /etc/hive/conf instead of /etc/conf/hive

2017-04-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-16335:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks Vihang.

> Beeline user HS2 connection file should use /etc/hive/conf instead of 
> /etc/conf/hive
> 
>
> Key: HIVE-16335
> URL: https://issues.apache.org/jira/browse/HIVE-16335
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 2.1.1, 2.2.0
>Reporter: Tim Harsch
>Assignee: Vihang Karajgaonkar
> Fix For: 3.0.0
>
> Attachments: HIVE-16335.01.patch
>
>
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
> says:  
> BeeLine looks for it in ${HIVE_CONF_DIR} location and /etc/conf/hive in that 
> order.
> shouldn't it be?
> BeeLine looks for it in ${HIVE_CONF_DIR} location and /etc/hive/conf in that 
> order?
> Most distributions I've used have a /etc/hive/conf dir.





[jira] [Updated] (HIVE-16297) Improving hive logging configuration variables

2017-04-04 Thread Aihua Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu updated HIVE-16297:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks Vihang.

> Improving hive logging configuration variables
> --
>
> Key: HIVE-16297
> URL: https://issues.apache.org/jira/browse/HIVE-16297
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Vihang Karajgaonkar
> Fix For: 3.0.0
>
> Attachments: HIVE-16297.01.patch, HIVE-16297.02.patch, 
> HIVE-16297.03.patch
>
>
> There are a few places in the source-code where we use 
> {{Configuration.dumpConfiguration()}}. We should preprocess the configuration 
> properties before dumping it in the logs.





[jira] [Updated] (HIVE-16378) Derby throws java.lang.StackOverflowError when it tries to get column stats from a table with thousands columns

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16378:
---
Description: 
to repro, set hive.stats.column.autogather=true, and run orc_wide_table.q

stack trace
{code}
Caused by: java.sql.SQLException: Java exception: ': 
java.lang.StackOverflowError'.
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source) 
~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.Util.javaException(Unknown Source) 
~[derby-10.10.2.0.jar:?]
at 
org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at 
org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at 
com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:1193) 
~[bonecp-0.8.0.RELEASE.jar:?]
at 
org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:345)
 ~[datanucleus-rdbms-4.1.19.jar:?]
at 
org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:211)
 ~[datanucleus-rdbms-4.1.19.jar:?]
at 
org.datanucleus.store.rdbms.query.JDOQLQuery.performExecute(JDOQLQuery.java:609)
 ~[datanucleus-rdbms-4.1.19.jar:?]
at org.datanucleus.store.query.Query.executeQuery(Query.java:1855) 
~[datanucleus-core-4.1.17.jar:?]
at org.datanucleus.store.query.Query.executeWithArray(Query.java:1744) 
~[datanucleus-core-4.1.17.jar:?]
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:368) 
~[datanucleus-api-jdo-4.2.4.jar:?]
... 83 more
Caused by: org.apache.derby.impl.jdbc.EmbedSQLException: Java exception: ': 
java.lang.StackOverflowError'.
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source) 
~[derby-10.10.2.0.jar:?]
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown
 Source) ~[derby-10.10.2.0.jar:?]
at 
org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown Source) 
~[derby-10.10.2.0.jar:?]
at org.apache.derby.impl.jdbc.Util.javaException(Unknown Source) 
~[derby-10.10.2.0.jar:?]
at 
org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
Source) ~[derby-10.10.2.0.jar:?]
{code}

  was:to repo, set hive.stats.column.autogather=true, and run orc_wide_table.q


> Derby throws java.lang.StackOverflowError when it tries to get column stats 
> from a table with thousands columns
> ---
>
> Key: HIVE-16378
> URL: https://issues.apache.org/jira/browse/HIVE-16378
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> to repro, set hive.stats.column.autogather=true, and run orc_wide_table.q
> stack trace
> {code}
> Caused by: java.sql.SQLException: Java exception: ': 
> java.lang.StackOverflowError'.
> at 
> org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown 
> Source) ~[derby-10.10.2.0.jar:?]
> at org.apache.derby.impl.jdbc.Util.newEmbedSQLException(Unknown 
> Source) ~[derby-10.10.2.0.jar:?]
> at org.apache.derby.impl.jdbc.Util.javaException(Unknown Source) 
> ~[derby-10.10.2.0.jar:?]
> at 
> org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown 
> Source) ~[derby-10.10.2.0.jar:?]
> at 
> 

[jira] [Assigned] (HIVE-16378) Derby throws java.lang.StackOverflowError when it tries to get column stats from a table with thousands columns

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong reassigned HIVE-16378:
--


> Derby throws java.lang.StackOverflowError when it tries to get column stats 
> from a table with thousands columns
> ---
>
> Key: HIVE-16378
> URL: https://issues.apache.org/jira/browse/HIVE-16378
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>
> to repro, set hive.stats.column.autogather=true, and run orc_wide_table.q





[jira] [Commented] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956035#comment-15956035
 ] 

Hive QA commented on HIVE-16372:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861945/HIVE-16372.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10579 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=235)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=235)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=143)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_non_native]
 (batchId=87)
org.apache.hadoop.hive.llap.security.TestLlapSignerImpl.testSigning 
(batchId=285)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=221)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4548/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4548/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4548/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861945 - PreCommit-HIVE-Build

> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16372.01.patch
>
>






[jira] [Updated] (HIVE-15986) Support "is [not] distinct from"

2017-04-04 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-15986:
---
Status: Patch Available  (was: Open)

Since {{is distinct from}} is equivalent to {{NOT <=>}}, we decided to use the 
existing UDFs instead of adding brand-new ones. The latest patch contains this 
change.
Ideally the grammar would rewrite this expression into {{<=>}}, but I couldn't 
figure out how to do that, so the AST nodes are instead replaced with the 
null-safe equality UDFs during AST-to-Expr conversion.
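The equivalence being relied on can be modeled directly: `a IS NOT DISTINCT FROM b` has the semantics of null-safe equality (`<=>`), where two NULLs compare as equal and a NULL never equals a non-NULL. A minimal sketch (the names below are illustrative, not Hive's UDF classes):

```java
// Model of SQL null-safe equality (<=>), i.e. IS NOT DISTINCT FROM.
// Plain SQL equality returns NULL (unknown) when either side is NULL;
// null-safe equality always returns a definite true/false.
public class NullSafeEq {
    public static boolean isNotDistinctFrom(Object a, Object b) {
        if (a == null && b == null) return true;   // NULL <=> NULL is true
        if (a == null || b == null) return false;  // NULL vs non-NULL: false
        return a.equals(b);
    }

    // IS DISTINCT FROM is simply the negation, matching NOT <=>.
    public static boolean isDistinctFrom(Object a, Object b) {
        return !isNotDistinctFrom(a, b);
    }
}
```

This is why a join condition like `t1.x is not distinct from t2.y` can match rows where both sides are NULL, unlike plain `=`.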

> Support "is [not] distinct from"
> 
>
> Key: HIVE-15986
> URL: https://issues.apache.org/jira/browse/HIVE-15986
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
> Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch
>
>
> Support standard "is [not] distinct from" syntax. For example this gives a 
> standard way to do a comparison to null safe join: select * from t1 join t2 
> on t1.x is not distinct from t2.y. SQL standard reference Section 8.15





[jira] [Updated] (HIVE-15986) Support "is [not] distinct from"

2017-04-04 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-15986:
---
Attachment: HIVE-15986.2.patch

> Support "is [not] distinct from"
> 
>
> Key: HIVE-15986
> URL: https://issues.apache.org/jira/browse/HIVE-15986
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
> Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch
>
>
> Support standard "is [not] distinct from" syntax. For example this gives a 
> standard way to do a comparison to null safe join: select * from t1 join t2 
> on t1.x is not distinct from t2.y. SQL standard reference Section 8.15





[jira] [Updated] (HIVE-15986) Support "is [not] distinct from"

2017-04-04 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-15986:
---
Status: Open  (was: Patch Available)

> Support "is [not] distinct from"
> 
>
> Key: HIVE-15986
> URL: https://issues.apache.org/jira/browse/HIVE-15986
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
> Attachments: HIVE-15986.1.patch, HIVE-15986.2.patch
>
>
> Support standard "is [not] distinct from" syntax. For example this gives a 
> standard way to do a comparison to null safe join: select * from t1 join t2 
> on t1.x is not distinct from t2.y. SQL standard reference Section 8.15





[jira] [Commented] (HIVE-11444) ACID Compactor should generate stats/alerts

2017-04-04 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15956006#comment-15956006
 ] 

Eugene Koifman commented on HIVE-11444:
---

More generally, raise an alert:
1. if there are too many open txns
2. if there are too many aborted txns - most likely a misconfigured streaming 
ingest client.  Need to include client info in the alert.
3. if there are a lot of entries in TXN_COMPONENTS - this means the compactor is 
not keeping up

In extreme cases, any of these can cause the amount of metadata to slow down 
metastore operations (TxnHandler/CompactionTxnHandler) and use very large 
amounts of RAM (ValidTxnList).


> ACID Compactor should generate stats/alerts
> ---
>
> Key: HIVE-11444
> URL: https://issues.apache.org/jira/browse/HIVE-11444
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> Compaction should generate stats about number of files it reads, min/max/avg 
> size etc.  It should also generate alerts if it looks like the system is not 
> configured correctly.
> For example, if there are lots of delta files with very small files, it's a 
> good sign that Streaming API is configured with batches that are too small.
> Simplest idea is to add another periodic task to AcidHouseKeeperService to
> //periodically do select count(*), min(txnid),max(txnid), type from 
> txns group by type.
> //1. dump that to log file at info
> //2. could also keep counts for last 10min, hour, 6 hours, 24 hours, 
> etc
> //2.2 if a large increase is detected - issue alert (at least to the 
> log for now) at warn/error
> Should also alert if there is ACID activity but no compactions running.
> One way to do this is to add logic to TxnHandler to periodically check 
> contents of COMPACTION_QUEUE table and keep  a simple histogram of 
> compactions over last few hours.
> Similarly can run a periodic check of transactions started (or 
> committed/aborted) and keep a simple histogram.  Then the 2 can be used to 
> detect that there is ACID write activity but no compaction activity.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16377) Clean up the code now that all locks belong to a transaction

2017-04-04 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-16377:
--
Description: 
split this from HIVE-12636 to make backporting (if needed)/reviews easier

TxnHandler, DbLockManager, DbTxnManager, etc

  was:split this from HIVE-12636 to make backporting (if needed)/reviews easier


> Clean up the code now that all locks belong to a transaction
> 
>
> Key: HIVE-16377
> URL: https://issues.apache.org/jira/browse/HIVE-16377
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> split this from HIVE-12636 to make backporting (if needed)/reviews easier
> TxnHandler, DbLockManager, DbTxnManager, etc



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16285) Servlet for dynamically configuring log levels

2017-04-04 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955994#comment-15955994
 ] 

Prasanth Jayachandran commented on HIVE-16285:
--

[~sseth] could you please take a look?

> Servlet for dynamically configuring log levels
> --
>
> Key: HIVE-16285
> URL: https://issues.apache.org/jira/browse/HIVE-16285
> Project: Hive
>  Issue Type: Improvement
>  Components: Logging
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, 
> HIVE-16285.3.patch
>
>
> Many long running services like HS2, LLAP etc. will benefit from having an 
> endpoint to dynamically change log levels for various loggers. This will help 
> greatly with debuggability without requiring a restart of the service. 
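A minimal sketch of the kind of runtime log-level change such an endpoint performs, shown with the JDK's java.util.logging so it is self-contained (Hive itself uses Log4j2, where `Configurator.setLevel` plays the analogous role); the servlet wiring is omitted and the class name is illustrative:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelSketch {
    // Change a named logger's level at runtime -- no service restart needed,
    // which is the whole point of the proposed endpoint.
    public static void setLevel(String loggerName, String levelName) {
        Logger.getLogger(loggerName).setLevel(Level.parse(levelName.toUpperCase()));
    }
}
```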



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16285) Servlet for dynamically configuring log levels

2017-04-04 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-16285:
-
Attachment: HIVE-16285.3.patch

Rebased patch

> Servlet for dynamically configuring log levels
> --
>
> Key: HIVE-16285
> URL: https://issues.apache.org/jira/browse/HIVE-16285
> Project: Hive
>  Issue Type: Improvement
>  Components: Logging
>Affects Versions: 2.2.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-16285.1.patch, HIVE-16285.2.patch, 
> HIVE-16285.3.patch
>
>
> Many long running services like HS2, LLAP etc. will benefit from having an 
> endpoint to dynamically change log levels for various loggers. This will help 
> greatly with debuggability without requiring a restart of the service. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16377) Clean up the code now that all locks belong to a transaction

2017-04-04 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman reassigned HIVE-16377:
-


> Clean up the code now that all locks belong to a transaction
> 
>
> Key: HIVE-16377
> URL: https://issues.apache.org/jira/browse/HIVE-16377
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>
> split this from HIVE-12636 to make backporting (if needed)/reviews easier



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-12636) Ensure that all queries (with DbTxnManager) run in a transaction

2017-04-04 Thread Eugene Koifman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955983#comment-15955983
 ] 

Eugene Koifman commented on HIVE-12636:
---

Should look into HIVE-16376 after HIVE-12636 is done

> Ensure that all queries (with DbTxnManager) run in a transaction
> 
>
> Key: HIVE-12636
> URL: https://issues.apache.org/jira/browse/HIVE-12636
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Affects Versions: 1.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
>Priority: Critical
> Attachments: HIVE-12636.01.patch, HIVE-12636.02.patch, 
> HIVE-12636.03.patch
>
>
> Assuming Hive is using DbTxnManager
> Currently (as of this writing only auto commit mode is supported), only 
> queries that write to an Acid table start a transaction.
> Read-only queries don't open a txn but still acquire locks.
> This makes internal structures confusing/odd.
> There are always 2 code paths to deal with, which is inconvenient and error 
> prone.
> Also, a txn id is a convenient "handle" for all locks/resources within a txn.
> Doing this would mean the client no longer needs to track the locks it 
> acquired.  This enables further improvements to the metastore side of Acid.
> # add metastore call to openTxn() and acquireLocks() in a single call.  this 
> it to make sure perf doesn't degrade for read-only query.  (Would also be 
> useful for auto commit write queries)
> # Should RO queries generate txn ids from the same sequence?  (they could for 
> example use negative values of a different sequence).  Txnid is part of the 
> delta/base file name.  Currently it's 7 digits.  If we use the same sequence, 
> we'll exceed 7 digits faster. (possible upgrade issue).  On the other hand 
> there is value in being able to pick txn id and commit timestamp out of the 
> same logical sequence.
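Point 2 above turns on how the txn id appears in delta/base directory names; a minimal sketch of the 7-digit zero-padded format (simplified -- actual Hive delta names have more components):

```java
public class DeltaNameSketch {
    // A delta directory name carries the txn id range, zero-padded to 7 digits.
    public static String deltaDir(long minTxnId, long maxTxnId) {
        return String.format("delta_%07d_%07d", minTxnId, maxTxnId);
    }
}
```

Once txn ids exceed 7 digits the names simply widen, so any consumer relying on purely lexicographic ordering of directory names could misorder, e.g., 9999999 vs 10000000 -- the potential upgrade issue the description alludes to.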



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16307) add IO memory usage report to LLAP UI

2017-04-04 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955965#comment-15955965
 ] 

Prasanth Jayachandran commented on HIVE-16307:
--

+1

Not sure, but it might be useful for the output to be JSON formatted so that 
UI/tools can consume it.

> add IO memory usage report to LLAP UI
> -
>
> Key: HIVE-16307
> URL: https://issues.apache.org/jira/browse/HIVE-16307
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16307.01.patch, HIVE-16307.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16346) inheritPerms should be conditional based on the target filesystem

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955961#comment-15955961
 ] 

Hive QA commented on HIVE-16346:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861943/HIVE-16346.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 6 failed/errored test(s), 10576 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.common.TestBlobStorageUtils.testValidAndInvalidFileSystems
 (batchId=242)
org.apache.hadoop.hive.io.TestHdfsUtils.testSetFullFileStatusFailInheritAcls 
(batchId=242)
org.apache.hadoop.hive.io.TestHdfsUtils.testSetFullFileStatusFailInheritGroup 
(batchId=242)
org.apache.hadoop.hive.io.TestHdfsUtils.testSetFullFileStatusFailInheritPerms 
(batchId=242)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4547/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4547/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4547/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 6 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861943 - PreCommit-HIVE-Build

> inheritPerms should be conditional based on the target filesystem
> -
>
> Key: HIVE-16346
> URL: https://issues.apache.org/jira/browse/HIVE-16346
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16346.1.patch, HIVE-16346.2.patch, 
> HIVE-16346.3.patch
>
>
> Right now, a lot of the logic in {{Hive.java}} attempts to set permissions of 
> different files that have been moved / copied. This is only triggered if 
> {{hive.warehouse.subdir.inherit.perms}} is set to true.
> However, on blobstores such as S3, there is no concept of file permissions, so 
> these calls are unnecessary and can cause a performance impact.
> One solution would be to set {{hive.warehouse.subdir.inherit.perms}} to 
> false, but this would be a global change affecting an entire HS2 instance, so 
> HDFS tables would no longer have permissions inheritance.
> A better solution would be to make the inheritance of permissions conditional 
> on the target filesystem.
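One way to make the check conditional on the target filesystem, sketched with an illustrative helper and scheme set (this is an assumption for illustration, not the actual HIVE-16346 patch):

```java
import java.net.URI;
import java.util.Set;

public class InheritPermsSketch {
    // Blobstore schemes with no real file permissions; chmod-style calls
    // there are wasted round trips. The set here is an illustrative assumption.
    static final Set<String> BLOBSTORE_SCHEMES = Set.of("s3a", "s3n", "wasb", "gs");

    /** Inherit permissions only when configured on AND the target FS supports them. */
    public static boolean shouldInheritPerms(boolean inheritPermsConf, URI target) {
        if (!inheritPermsConf) {
            return false;
        }
        String scheme = target.getScheme();
        // No scheme means the default FS (typically HDFS): keep inheritance.
        return scheme == null || !BLOBSTORE_SCHEMES.contains(scheme.toLowerCase());
    }
}
```

This keeps `hive.warehouse.subdir.inherit.perms` meaningful for HDFS tables while skipping the no-op permission calls on blobstore paths.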



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16293) Column pruner should continue to work when SEL has more than 1 child

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16293:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Column pruner should continue to work when SEL has more than 1 child
> 
>
> Key: HIVE-16293
> URL: https://issues.apache.org/jira/browse/HIVE-16293
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16293.01.patch, HIVE-16293.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16293) Column pruner should continue to work when SEL has more than 1 child

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16293:
---
Labels: 2.3.0  (was: )

> Column pruner should continue to work when SEL has more than 1 child
> 
>
> Key: HIVE-16293
> URL: https://issues.apache.org/jira/browse/HIVE-16293
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16293.01.patch, HIVE-16293.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16293) Column pruner should continue to work when SEL has more than 1 child

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16293:
---
Fix Version/s: 2.3.0

> Column pruner should continue to work when SEL has more than 1 child
> 
>
> Key: HIVE-16293
> URL: https://issues.apache.org/jira/browse/HIVE-16293
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16293.01.patch, HIVE-16293.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16369) Vectorization: Support PTF (Part 1: No Custom Window Framing -- Default Only)

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955863#comment-15955863
 ] 

Hive QA commented on HIVE-16369:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861864/HIVE-16369.01.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 70 failed/errored test(s), 10577 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[correlationoptimizer12] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ctas_colname] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[distinct_windowing] 
(batchId=11)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[distinct_windowing_no_cbo]
 (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby_grouping_window] 
(batchId=30)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_windowing1] 
(batchId=42)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ptfgroupbyjoin] 
(batchId=79)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[quotedid_basic] 
(batchId=57)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin2] (batchId=51)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin4] (batchId=80)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[semijoin5] (batchId=15)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_in_having] 
(batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subquery_unqualcolumnrefs]
 (batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_remove_6_subq] 
(batchId=36)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_ptf_part_simple] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[windowing_gby2] 
(batchId=33)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[windowing_streaming] 
(batchId=63)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[columnstats_part_coltype]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_resolution]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[ptf] 
(batchId=143)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[ptf_streaming]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_scalar]
 (batchId=148)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby4]
 (batchId=143)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby6]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_id2]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets2]
 (batchId=147)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets4]
 (batchId=146)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_sets5]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_window]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_rollup1]
 (batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction2]
 (batchId=149)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction]
 (batchId=141)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[windowing] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join1]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join2]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join3]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join4]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[vector_outer_join5]
 (batchId=166)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[groupby_resolution] 
(batchId=114)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ptf] (batchId=104)

[jira] [Updated] (HIVE-10876) Get rid of SessionState from HiveServer2 codepath

2017-04-04 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-10876:

Summary: Get rid of SessionState from HiveServer2 codepath  (was: Get rid 
of Driver and SessionState from HiveServer2 codepath)

> Get rid of SessionState from HiveServer2 codepath
> -
>
> Key: HIVE-10876
> URL: https://issues.apache.org/jira/browse/HIVE-10876
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Reporter: Vaibhav Gumashta
>
> It appears that the Driver and SessionState abstractions are better served in 
> HiveServer2 by moving the state for a query to the Operation abstraction and 
> the state of a client session to HiveSession. Currently, the state is mixed 
> into Driver and SessionState which were abstractions from the CLI/HiveServer1 
> world. It would have made working on HIVE-4239 much easier.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HIVE-13456) JDBC: fix Statement.cancel

2017-04-04 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta resolved HIVE-13456.
-
Resolution: Fixed

HIVE-16172 fixes this.

> JDBC: fix Statement.cancel
> --
>
> Key: HIVE-13456
> URL: https://issues.apache.org/jira/browse/HIVE-13456
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, JDBC
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
>
> JDBC Statement.cancel is supposed to work by cancelling the underlying 
> execution and freeing resources. However, in my testing, I see it failing in 
> some runs for the same query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16345) BeeLineDriver should be able to run qtest files which are using default database tables

2017-04-04 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-16345:
--
Status: Patch Available  (was: Open)

> BeeLineDriver should be able to run qtest files which are using default 
> database tables
> ---
>
> Key: HIVE-16345
> URL: https://issues.apache.org/jira/browse/HIVE-16345
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-16345.patch
>
>
> It would be good to be able to run the default clientpositive tests. 
> Currently we can not do that, since we start with a specific database. We 
> should filter the query input and replace the table references



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16374) Remove serializer & deserializer interfaces

2017-04-04 Thread Vaibhav Gumashta (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955788#comment-15955788
 ] 

Vaibhav Gumashta commented on HIVE-16374:
-

With most file formats being typed and significant hive code being vectorized 
(vectorized containers in memory are also typed), I wonder if it makes sense to 
have the Serde/ObjectInspector mechanism at all going forward. From my 
experience so far, it adds a lot of code complexity and performance overhead. 

> Remove serializer & deserializer interfaces
> ---
>
> Key: HIVE-16374
> URL: https://issues.apache.org/jira/browse/HIVE-16374
> Project: Hive
>  Issue Type: Task
>  Components: Serializers/Deserializers
>Reporter: Ashutosh Chauhan
>Priority: Blocker
>
> These interfaces are deprecated in favor of their Abstract class equivalents. 
> We can remove them in 3.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16345) BeeLineDriver should be able to run qtest files which are using default database tables

2017-04-04 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-16345:
--
Attachment: HIVE-16345.patch

Let's see what Jenkins thinks about this patch

> BeeLineDriver should be able to run qtest files which are using default 
> database tables
> ---
>
> Key: HIVE-16345
> URL: https://issues.apache.org/jira/browse/HIVE-16345
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Peter Vary
>Assignee: Peter Vary
> Attachments: HIVE-16345.patch
>
>
> It would be good to be able to run the default clientpositive tests. 
> Currently we can not do that, since we start with a specific database. We 
> should filter the query input and replace the table references



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16296) use LLAP executor count to configure reducer auto-parallelism

2017-04-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955768#comment-15955768
 ] 

Sergey Shelukhin commented on HIVE-16296:
-

So do I, but not HiveQA apparently.

> use LLAP executor count to configure reducer auto-parallelism
> -
>
> Key: HIVE-16296
> URL: https://issues.apache.org/jira/browse/HIVE-16296
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16296.01.patch, HIVE-16296.03.patch, 
> HIVE-16296.04.patch, HIVE-16296.05.patch, HIVE-16296.06.patch, 
> HIVE-16296.07.patch, HIVE-16296.08.patch, HIVE-16296.2.patch, HIVE-16296.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16296) use LLAP executor count to configure reducer auto-parallelism

2017-04-04 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955760#comment-15955760
 ] 

Gopal V commented on HIVE-16296:


This is a bit strange - I think HDFS has some consistent ordering of files in a 
directory when listing it. 

I'm getting consistent results without an ORDER BY (i.e. the output files are 
read in reducer order - reducer 0 files first, etc.).

> use LLAP executor count to configure reducer auto-parallelism
> -
>
> Key: HIVE-16296
> URL: https://issues.apache.org/jira/browse/HIVE-16296
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16296.01.patch, HIVE-16296.03.patch, 
> HIVE-16296.04.patch, HIVE-16296.05.patch, HIVE-16296.06.patch, 
> HIVE-16296.07.patch, HIVE-16296.08.patch, HIVE-16296.2.patch, HIVE-16296.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16296) use LLAP executor count to configure reducer auto-parallelism

2017-04-04 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-16296:

Attachment: HIVE-16296.08.patch

Sorting query results doesn't work so well with LIMIT without ORDER BY... added 
ORDER BY everywhere

> use LLAP executor count to configure reducer auto-parallelism
> -
>
> Key: HIVE-16296
> URL: https://issues.apache.org/jira/browse/HIVE-16296
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16296.01.patch, HIVE-16296.03.patch, 
> HIVE-16296.04.patch, HIVE-16296.05.patch, HIVE-16296.06.patch, 
> HIVE-16296.07.patch, HIVE-16296.08.patch, HIVE-16296.2.patch, HIVE-16296.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15795) Support Accumulo Index Tables in Hive Accumulo Connector

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955743#comment-15955743
 ] 

Hive QA commented on HIVE-15795:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861930/HIVE-15795.2.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10607 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4545/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4545/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4545/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861930 - PreCommit-HIVE-Build

> Support Accumulo Index Tables in Hive Accumulo Connector
> 
>
> Key: HIVE-15795
> URL: https://issues.apache.org/jira/browse/HIVE-15795
> Project: Hive
>  Issue Type: Improvement
>  Components: Accumulo Storage Handler
>Reporter: Mike Fagan
>Assignee: Mike Fagan
>Priority: Minor
> Attachments: HIVE-15795.1.patch, HIVE-15795.2.patch
>
>
> Ability to specify an accumulo index table for an accumulo-hive table.
> This would greatly improve performance for non-rowid query predicates



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16372:
---
Status: Patch Available  (was: Open)

> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16372.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955730#comment-15955730
 ] 

Pengcheng Xiong commented on HIVE-16372:


[~thejas], can you take a look? I will deal with the renaming in another patch.

> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16372.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong updated HIVE-16372:
---
Attachment: HIVE-16372.01.patch

> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16372.01.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HIVE-16368) Unexpected java.lang.ArrayIndexOutOfBoundsException from query with LaterView Operation for hive on MR.

2017-04-04 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955718#comment-15955718
 ] 

zhihai xu edited comment on HIVE-16368 at 4/4/17 8:05 PM:
--

Yes, thanks for the review, I will add a .q test case.


was (Author: zxu):
Yes, thanks for the review, I will add a .q test case.

> Unexpected java.lang.ArrayIndexOutOfBoundsException from query with LaterView 
> Operation for hive on MR.
> ---
>
> Key: HIVE-16368
> URL: https://issues.apache.org/jira/browse/HIVE-16368
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HIVE-16368.000.patch
>
>
> Unexpected java.lang.ArrayIndexOutOfBoundsException from a query. It happened 
> in the LateralView operation, for hive-on-mr. The reason is that column 
> pruning changes the column order in the LateralView operation: for back-to-back 
> reducesink operators using the MR engine, FileSinkOperator and TableScanOperator 
> are added before the second ReduceSink operator, and the serialization column 
> order used by FileSinkOperator in LazyBinarySerDe of the previous reducer is 
> different from the deserialization column order from the table desc used by 
> MapOperator/TableScanOperator in LazyBinarySerDe of the current failed mapper.
> The serialization is decided by the outputObjInspector from 
> LateralViewJoinOperator,
> {code}
> ArrayList<String> fieldNames = conf.getOutputInternalColNames();
> outputObjInspector = ObjectInspectorFactory
> .getStandardStructObjectInspector(fieldNames, ois);
> {code}
> So the column order for serialization is decided by getOutputInternalColNames 
> in LateralViewJoinOperator.
> The deserialization is decided by TableScanOperator which is created at  
> GenMapRedUtils.splitTasks. 
> {code}
> TableDesc tt_desc = PlanUtils.getIntermediateFileTableDesc(PlanUtils
> .getFieldSchemasFromRowSchema(parent.getSchema(), "temporarycol"));
> // Create the temporary file, its corresponding FileSinkOperaotr, and
> // its corresponding TableScanOperator.
> TableScanOperator tableScanOp =
> createTemporaryFile(parent, op, taskTmpDir, tt_desc, parseCtx);
> {code}
> The column order for deserialization is decided by rowSchema of 
> LateralViewJoinOperator.
> But ColumnPrunerLateralViewJoinProc changes the order of 
> outputInternalColNames while keeping the original order of rowSchema, which 
> causes the mismatch between serialization and deserialization across the two 
> back-to-back MR jobs.
> ColumnPrunerLateralViewForwardProc has a similar issue: it changes the column 
> order of its child selector's colList but not of rowSchema.
> The exception is 
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 875968094
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.byteArrayToLong(LazyBinaryUtils.java:78)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryDouble.init(LazyBinaryDouble.java:43)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.uncheckedGetField(LazyBinaryStruct.java:264)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:201)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:64)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator._evaluate(ExprNodeColumnEvaluator.java:94)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:65)
>   at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:554)
>   at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:381)
> {code}
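The mechanics of the failure can be reproduced outside Hive with a toy serializer (the class and method names below are illustrative, not Hive APIs): when the writer serializes columns in the pruned order while the reader still assumes the original schema order, both fields come back as garbage.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Toy illustration (not Hive code) of the FileSinkOperator/TableScanOperator
// column-order mismatch: write in one column order, read in another.
public class ColumnOrderMismatch {
    // Writer emits (long id, double score) -- the "pruned" column order.
    static byte[] serialize(long id, double score) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            out.writeLong(id);
            out.writeDouble(score);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Reader assumes (double score, long id) -- the stale column order.
    static Object[] deserializeWrongOrder(byte[] bytes) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            double score = in.readDouble(); // actually consumes the long's 8 bytes
            long id = in.readLong();        // actually consumes the double's 8 bytes
            return new Object[] { score, id };
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Object[] decoded = deserializeWrongOrder(serialize(42L, 3.5));
        // Neither field round-trips: the bytes are reinterpreted as the wrong types.
        System.out.println((double) decoded[0] == 3.5); // prints false
        System.out.println((long) decoded[1] == 42L);   // prints false
    }
}
```

In the real bug the reinterpreted bytes feed LazyBinaryStruct's field offsets, which is why the symptom is an ArrayIndexOutOfBoundsException rather than merely wrong values.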



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16368) Unexpected java.lang.ArrayIndexOutOfBoundsException from query with LaterView Operation for hive on MR.

2017-04-04 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955718#comment-15955718
 ] 

zhihai xu commented on HIVE-16368:
--

Yes, thanks for the review, I will add a .q test case.

> Unexpected java.lang.ArrayIndexOutOfBoundsException from query with LaterView 
> Operation for hive on MR.
> ---
>
> Key: HIVE-16368
> URL: https://issues.apache.org/jira/browse/HIVE-16368
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HIVE-16368.000.patch
>
>
> An unexpected java.lang.ArrayIndexOutOfBoundsException is thrown from a query 
> with a LateralView operation when Hive runs on MR. The root cause is that 
> column pruning changes the column order in the LateralView operation. For 
> back-to-back ReduceSink operators on the MR engine, a FileSinkOperator and a 
> TableScanOperator are added before the second ReduceSink operator, so the 
> serialization column order used by the FileSinkOperator in LazyBinarySerDe of 
> the previous reducer differs from the deserialization column order, taken 
> from the table desc, used by the MapOperator/TableScanOperator in 
> LazyBinarySerDe of the failing mapper.
> The serialization order is decided by the outputObjInspector from 
> LateralViewJoinOperator:
> {code}
> ArrayList<String> fieldNames = conf.getOutputInternalColNames();
> outputObjInspector = ObjectInspectorFactory
> .getStandardStructObjectInspector(fieldNames, ois);
> {code}
> So the column order for serialization is decided by getOutputInternalColNames 
> in LateralViewJoinOperator.
> The deserialization order is decided by the TableScanOperator, which is 
> created in GenMapRedUtils.splitTasks:
> {code}
> TableDesc tt_desc = PlanUtils.getIntermediateFileTableDesc(PlanUtils
> .getFieldSchemasFromRowSchema(parent.getSchema(), "temporarycol"));
> // Create the temporary file, its corresponding FileSinkOperaotr, and
> // its corresponding TableScanOperator.
> TableScanOperator tableScanOp =
> createTemporaryFile(parent, op, taskTmpDir, tt_desc, parseCtx);
> {code}
> The column order for deserialization is decided by rowSchema of 
> LateralViewJoinOperator.
> But ColumnPrunerLateralViewJoinProc changes the order of 
> outputInternalColNames while keeping the original order of rowSchema, which 
> causes the mismatch between serialization and deserialization across the two 
> back-to-back MR jobs.
> ColumnPrunerLateralViewForwardProc has a similar issue: it changes the column 
> order of its child selector's colList but not of rowSchema.
> The exception is 
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 875968094
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.byteArrayToLong(LazyBinaryUtils.java:78)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryDouble.init(LazyBinaryDouble.java:43)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.uncheckedGetField(LazyBinaryStruct.java:264)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:201)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:64)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator._evaluate(ExprNodeColumnEvaluator.java:94)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:65)
>   at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:554)
>   at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:381)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16366) Hive 2.3 release planning

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955683#comment-15955683
 ] 

Pengcheng Xiong commented on HIVE-16366:


[~spena], thanks a lot!!!

> Hive 2.3 release planning
> -
>
> Key: HIVE-16366
> URL: https://issues.apache.org/jira/browse/HIVE-16366
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Blocker
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16366-branch-2.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16366) Hive 2.3 release planning

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955672#comment-15955672
 ] 

Hive QA commented on HIVE-16366:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861829/HIVE-16366-branch-2.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10562 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[comments] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
org.apache.hive.hcatalog.api.TestHCatClient.testTransportFailure (batchId=174)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4544/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4544/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4544/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861829 - PreCommit-HIVE-Build

> Hive 2.3 release planning
> -
>
> Key: HIVE-16366
> URL: https://issues.apache.org/jira/browse/HIVE-16366
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Blocker
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16366-branch-2.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16346) inheritPerms should be conditional based on the target filesystem

2017-04-04 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-16346:

Attachment: HIVE-16346.3.patch

> inheritPerms should be conditional based on the target filesystem
> -
>
> Key: HIVE-16346
> URL: https://issues.apache.org/jira/browse/HIVE-16346
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
> Attachments: HIVE-16346.1.patch, HIVE-16346.2.patch, 
> HIVE-16346.3.patch
>
>
> Right now, a lot of the logic in {{Hive.java}} attempts to set permissions of 
> different files that have been moved / copied. This is only triggered if 
> {{hive.warehouse.subdir.inherit.perms}} is set to true.
> However, on blobstores such as S3 there is no concept of file permissions, 
> so these calls are unnecessary and can have a performance impact.
> One solution would be to set {{hive.warehouse.subdir.inherit.perms}} to 
> false, but that would be a global change affecting an entire HS2 instance, so 
> HDFS tables would no longer have permissions inheritance.
> A better solution would be to make the inheritance of permissions conditional 
> on the target filesystem.
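A minimal sketch of such a conditional check, assuming a hard-coded list of blobstore schemes (the scheme list, class, and method names are assumptions for illustration, not the actual HIVE-16346 patch):

```java
import java.net.URI;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch only: gate permission inheritance on the target URI's scheme,
// skipping filesystems that have no POSIX-style permissions.
public class InheritPermsCheck {
    private static final Set<String> BLOBSTORE_SCHEMES =
        new HashSet<>(Arrays.asList("s3", "s3a", "s3n", "wasb", "adl"));

    static boolean shouldInheritPerms(boolean inheritPermsConf, URI target) {
        if (!inheritPermsConf) {
            return false; // global config still wins
        }
        String scheme = target.getScheme();
        // Blobstores have no real file permissions; skip the extra RPCs.
        return scheme == null || !BLOBSTORE_SCHEMES.contains(scheme.toLowerCase());
    }

    public static void main(String[] args) {
        System.out.println(shouldInheritPerms(true, URI.create("hdfs://nn/warehouse/t")));   // true
        System.out.println(shouldInheritPerms(true, URI.create("s3a://bucket/warehouse/t"))); // false
        System.out.println(shouldInheritPerms(false, URI.create("hdfs://nn/warehouse/t")));  // false
    }
}
```

This keeps the existing global flag as an off switch while making the per-path decision depend on the target filesystem.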



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16151) BytesBytesHashTable allocates large arrays

2017-04-04 Thread Gopal V (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955654#comment-15955654
 ] 

Gopal V commented on HIVE-16151:


[~sershe]: Added to my build, will review 

> BytesBytesHashTable allocates large arrays
> --
>
> Key: HIVE-16151
> URL: https://issues.apache.org/jira/browse/HIVE-16151
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16151.patch
>
>
> These arrays cause GC pressure and also impose key-count limitations on the 
> table. With respect to the latter, we won't be able to get rid of it without 
> a 64-bit hash function, but for now we can get rid of the former. If we need 
> the latter we'd add murmur64 and probably account for it differently for 
> resize (we don't want to blow up the hashtable by 4 bytes/key in the common 
> case where the # of keys is less than ~1.5B :))



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-16369) Vectorization: Support PTF (Part 1: No Custom Window Framing -- Default Only)

2017-04-04 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-16369:

Status: Patch Available  (was: Open)

> Vectorization: Support PTF (Part 1: No Custom Window Framing -- Default Only)
> -
>
> Key: HIVE-16369
> URL: https://issues.apache.org/jira/browse/HIVE-16369
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-16369.01.patch
>
>
> Support vectorization of PTF that doesn't include custom PRECEDING / 
> FOLLOWING window frame clauses.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (HIVE-16311) Improve the performance for FastHiveDecimalImpl.fastDivide

2017-04-04 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955628#comment-15955628
 ] 

Xuefu Zhang edited comment on HIVE-16311 at 4/4/17 6:58 PM:


[~colin_mjj], the code changes look good. However, I'm not sure how the test 
diff (such as the one below) came about:
{code}
-1234567.8901234560 137174.210013717333
+1234567.8901234560 137174.210013717330
{code}


was (Author: xuefuz):
[~colin_mjj], the code changes look good. However, I'm not sure how the test 
diff (such as the one below) came about:
{code}
 -1234567.8901234560137174.210013717333
+1234567.8901234560 137174.210013717330
{code}

> Improve the performance for FastHiveDecimalImpl.fastDivide
> --
>
> Key: HIVE-16311
> URL: https://issues.apache.org/jira/browse/HIVE-16311
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Fix For: 3.0.0
>
> Attachments: HIVE-16311.001.patch, HIVE-16311.002.patch, 
> HIVE-16311.003.patch, HIVE-16311.004.patch
>
>
> FastHiveDecimalImpl.fastDivide performs poorly when evaluating an 
> expression such as 12345.67/123.45.
> There are two points that can be improved:
> 1. Don't always use HiveDecimal.MAX_SCALE as the scale when doing the 
> BigDecimal.divide.
> 2. Get the precision of the BigInteger in a fast way when possible.
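Point 1 can be illustrated with plain BigDecimal: bounding the divide scale to what the result actually needs, instead of always using HiveDecimal.MAX_SCALE (38), avoids computing and carrying dozens of unnecessary digits. The bound used below is illustrative, not the patch's actual heuristic.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Illustrative comparison of an always-MAX_SCALE divide with a
// bounded-scale divide; 38 mirrors HiveDecimal.MAX_SCALE.
public class FastDivideSketch {
    static final int MAX_SCALE = 38;

    static BigDecimal divideMaxScale(BigDecimal a, BigDecimal b) {
        return a.divide(b, MAX_SCALE, RoundingMode.HALF_UP);
    }

    // Use a smaller scale when it is enough to represent the result.
    static BigDecimal divideBounded(BigDecimal a, BigDecimal b, int scale) {
        return a.divide(b, scale, RoundingMode.HALF_UP).stripTrailingZeros();
    }

    public static void main(String[] args) {
        BigDecimal a = new BigDecimal("12345.67");
        BigDecimal b = new BigDecimal("123.45");
        // Same leading digits, but far fewer fractional digits to compute.
        System.out.println(divideMaxScale(a, b));
        System.out.println(divideBounded(a, b, 10));
    }
}
```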



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16311) Improve the performance for FastHiveDecimalImpl.fastDivide

2017-04-04 Thread Xuefu Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955628#comment-15955628
 ] 

Xuefu Zhang commented on HIVE-16311:


[~colin_mjj], the code changes look good. However, I'm not sure how the test 
diff (such as the one below) came about:
{code}
 -1234567.8901234560137174.210013717333
+1234567.8901234560 137174.210013717330
{code}

> Improve the performance for FastHiveDecimalImpl.fastDivide
> --
>
> Key: HIVE-16311
> URL: https://issues.apache.org/jira/browse/HIVE-16311
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: Colin Ma
>Assignee: Colin Ma
> Fix For: 3.0.0
>
> Attachments: HIVE-16311.001.patch, HIVE-16311.002.patch, 
> HIVE-16311.003.patch, HIVE-16311.004.patch
>
>
> FastHiveDecimalImpl.fastDivide performs poorly when evaluating an 
> expression such as 12345.67/123.45.
> There are two points that can be improved:
> 1. Don't always use HiveDecimal.MAX_SCALE as the scale when doing the 
> BigDecimal.divide.
> 2. Get the precision of the BigInteger in a fast way when possible.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16307) add IO memory usage report to LLAP UI

2017-04-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955623#comment-15955623
 ] 

Sergey Shelukhin commented on HIVE-16307:
-

[~prasanth_j] ping?

> add IO memory usage report to LLAP UI
> -
>
> Key: HIVE-16307
> URL: https://issues.apache.org/jira/browse/HIVE-16307
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16307.01.patch, HIVE-16307.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16151) BytesBytesHashTable allocates large arrays

2017-04-04 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955621#comment-15955621
 ] 

Sergey Shelukhin commented on HIVE-16151:
-

[~gopalv] ping?

> BytesBytesHashTable allocates large arrays
> --
>
> Key: HIVE-16151
> URL: https://issues.apache.org/jira/browse/HIVE-16151
> Project: Hive
>  Issue Type: Bug
>Reporter: Prasanth Jayachandran
>Assignee: Sergey Shelukhin
> Attachments: HIVE-16151.patch
>
>
> These arrays cause GC pressure and also impose key-count limitations on the 
> table. With respect to the latter, we won't be able to get rid of it without 
> a 64-bit hash function, but for now we can get rid of the former. If we need 
> the latter we'd add murmur64 and probably account for it differently for 
> resize (we don't want to blow up the hashtable by 4 bytes/key in the common 
> case where the # of keys is less than ~1.5B :))



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16366) Hive 2.3 release planning

2017-04-04 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HIVE-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955601#comment-15955601
 ] 

Sergio Peña commented on HIVE-16366:


Hey [~pxiong]. Sadly, this requires manual intervention. I've just created the 
branch-2.3-mr2 profile on the ptest server.
The test is running now 
(https://builds.apache.org/view/H-L/view/Hive/job/PreCommit-HIVE-Build/4544/)

> Hive 2.3 release planning
> -
>
> Key: HIVE-16366
> URL: https://issues.apache.org/jira/browse/HIVE-16366
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Blocker
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16366-branch-2.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16171) Support replication of truncate table

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955578#comment-15955578
 ] 

Hive QA commented on HIVE-16171:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861917/HIVE-16171.04.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10578 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4543/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4543/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4543/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861917 - PreCommit-HIVE-Build

> Support replication of truncate table
> -
>
> Key: HIVE-16171
> URL: https://issues.apache.org/jira/browse/HIVE-16171
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 2.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR
> Attachments: HIVE-16171.01.patch, HIVE-16171.02.patch, 
> HIVE-16171.03.patch, HIVE-16171.04.patch
>
>
> Need to support truncate table for replication. Key points to note:
> 1. For a non-partitioned table, truncate table removes all the rows from the 
> table.
> 2. For partitioned tables, we need to consider how truncate behaves when 
> truncating a partition versus the whole table.
> 3. Bootstrap load with truncate table must work as-is, since it is just 
> loadTable/loadPartition with an empty dataset.
> 4. It is suggested to reuse the alter table/alter partition events to handle 
> truncate.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-13517) Hive logs in Spark Executor and Driver should show thread-id.

2017-04-04 Thread Sahil Takiar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-13517:

Status: Open  (was: Patch Available)

> Hive logs in Spark Executor and Driver should show thread-id.
> -
>
> Key: HIVE-13517
> URL: https://issues.apache.org/jira/browse/HIVE-13517
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 2.0.0, 1.2.1
>Reporter: Szehon Ho
>Assignee: Sahil Takiar
> Attachments: executor-driver-log.PNG, HIVE-13517.1.patch, 
> HIVE-13517.2.patch
>
>
> In Spark, there might be more than one task running in one executor. 
> Similarly, there may be more than one thread running in the driver.
> This makes debugging through the logs a nightmare. It would be great if the 
> logs included thread ids.
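One fix along these lines, assuming the executors and driver log through a Log4j 1.x PatternLayout (the appender name and file layout here are illustrative, not the patch), is to add the %t conversion character, which prints the name of the thread that emitted each event:

```properties
# Illustrative log4j pattern; %t expands to the emitting thread's name.
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2}: %m%n
```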



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15795) Support Accumulo Index Tables in Hive Accumulo Connector

2017-04-04 Thread Mike Fagan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Fagan updated HIVE-15795:
--
Status: Patch Available  (was: In Progress)

> Support Accumulo Index Tables in Hive Accumulo Connector
> 
>
> Key: HIVE-15795
> URL: https://issues.apache.org/jira/browse/HIVE-15795
> Project: Hive
>  Issue Type: Improvement
>  Components: Accumulo Storage Handler
>Reporter: Mike Fagan
>Assignee: Mike Fagan
>Priority: Minor
> Attachments: HIVE-15795.1.patch, HIVE-15795.2.patch
>
>
> Ability to specify an Accumulo index table for an Accumulo-Hive table.
> This would greatly improve performance for non-rowid query predicates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15795) Support Accumulo Index Tables in Hive Accumulo Connector

2017-04-04 Thread Mike Fagan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Fagan updated HIVE-15795:
--
Status: In Progress  (was: Patch Available)

> Support Accumulo Index Tables in Hive Accumulo Connector
> 
>
> Key: HIVE-15795
> URL: https://issues.apache.org/jira/browse/HIVE-15795
> Project: Hive
>  Issue Type: Improvement
>  Components: Accumulo Storage Handler
>Reporter: Mike Fagan
>Assignee: Mike Fagan
>Priority: Minor
> Attachments: HIVE-15795.1.patch, HIVE-15795.2.patch
>
>
> Ability to specify an Accumulo index table for an Accumulo-Hive table.
> This would greatly improve performance for non-rowid query predicates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15795) Support Accumulo Index Tables in Hive Accumulo Connector

2017-04-04 Thread Mike Fagan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Fagan updated HIVE-15795:
--
Attachment: HIVE-15795.2.patch

Latest patch based on the review feedback.

> Support Accumulo Index Tables in Hive Accumulo Connector
> 
>
> Key: HIVE-15795
> URL: https://issues.apache.org/jira/browse/HIVE-15795
> Project: Hive
>  Issue Type: Improvement
>  Components: Accumulo Storage Handler
>Reporter: Mike Fagan
>Assignee: Mike Fagan
>Priority: Minor
> Attachments: HIVE-15795.1.patch, HIVE-15795.2.patch
>
>
> Ability to specify an Accumulo index table for an Accumulo-Hive table.
> This would greatly improve performance for non-rowid query predicates.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16372) Enable DDL statement for non-native tables (add/remove table properties)

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong reassigned HIVE-16372:
--


> Enable DDL statement for non-native tables (add/remove table properties)
> 
>
> Key: HIVE-16372
> URL: https://issues.apache.org/jira/browse/HIVE-16372
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HIVE-16373) Enable DDL statement for non-native tables (rename table)

2017-04-04 Thread Pengcheng Xiong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pengcheng Xiong reassigned HIVE-16373:
--


> Enable DDL statement for non-native tables (rename table)
> -
>
> Key: HIVE-16373
> URL: https://issues.apache.org/jira/browse/HIVE-16373
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16371) Add bitmap selection strategy for druid storage handler

2017-04-04 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955531#comment-15955531
 ] 

Jesus Camacho Rodriguez commented on HIVE-16371:


LGTM, +1

Minor comment: maybe you can use if-else in the initialization to avoid 
calling the constructor twice?

> Add bitmap selection strategy for druid storage handler
> ---
>
> Key: HIVE-16371
> URL: https://issues.apache.org/jira/browse/HIVE-16371
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Affects Versions: storage-2.2.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 3.0.0
>
> Attachments: HIVE-16371.patch
>
>
> Currently only the Concise bitmap strategy is supported.
> This PR makes Roaring bitmap encoding the default and keeps Concise as an 
> option if needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HIVE-15996) Implement multiargument GROUPING function

2017-04-04 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-15996:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, thanks for reviewing [~ashutoshc]!

> Implement multiargument GROUPING function
> -
>
> Key: HIVE-15996
> URL: https://issues.apache.org/jira/browse/HIVE-15996
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.2.0
>Reporter: Carter Shanklin
>Assignee: Jesus Camacho Rodriguez
> Fix For: 3.0.0
>
> Attachments: HIVE-15996.01.patch, HIVE-15996.02.patch, 
> HIVE-15996.03.patch, HIVE-15996.04.patch
>
>
> Per the SQL standard section 6.9:
> GROUPING ( CR1, ..., CRN-1, CRN )
> is equivalent to:
> CAST ( ( 2 * GROUPING ( CR1, ..., CRN-1 ) + GROUPING ( CRN ) ) AS IDT )
> So for example:
> select c1, c2, c3, grouping(c1, c2, c3) from e011_02 group by rollup(c1, c2, 
> c3);
> Should be allowed and equivalent to:
> select c1, c2, c3, 4*grouping(c1) + 2*grouping(c2) + grouping(c3) from 
> e011_02 group by rollup(c1, c2, c3);
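The equivalence above simply packs the per-column grouping bits into an integer, with the first argument as the most significant bit. A minimal sketch of that fold (class and method names are illustrative):

```java
// Sketch of the SQL-standard multi-argument GROUPING semantics:
// GROUPING(c1, ..., cn) is the fold acc = 2 * acc + grouping(ci),
// i.e. the per-column bits packed with c1 as the most significant bit.
public class GroupingFold {
    static long grouping(int... bits) {
        long acc = 0;
        for (int b : bits) {
            if (b != 0 && b != 1) {
                throw new IllegalArgumentException("grouping bit must be 0 or 1");
            }
            acc = 2 * acc + b;
        }
        return acc;
    }

    public static void main(String[] args) {
        // grouping(c1, c2, c3) with c1 and c3 aggregated away:
        System.out.println(grouping(1, 0, 1)); // 4*1 + 2*0 + 1 = 5
    }
}
```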



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16366) Hive 2.3 release planning

2017-04-04 Thread Pengcheng Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955515#comment-15955515
 ] 

Pengcheng Xiong commented on HIVE-16366:


[~spena], sorry to trouble you. I am trying to get a ptest run for the 2.3 
branch. I followed exactly the same steps that you mentioned in HIVE-15007. 
However, I got the following error:
{code}
Exception in thread "main" java.lang.RuntimeException: Status 
[name=ILLEGAL_ARGUMENT, message=Profile branch-2.3-mr2 not found in directory 
/usr/local/hiveptest/profiles]
at org.apache.hive.ptest.api.Status.assertOKOrFailed(Status.java:69)
at 
org.apache.hive.ptest.api.client.PTestClient.testTailLog(PTestClient.java:178)
at 
org.apache.hive.ptest.api.client.PTestClient.testStart(PTestClient.java:135)
at 
org.apache.hive.ptest.api.client.PTestClient.main(PTestClient.java:320)
{code}

However, my branch remotes/origin/branch-2.3 is visible.
Could you help me with this? Thanks.

> Hive 2.3 release planning
> -
>
> Key: HIVE-16366
> URL: https://issues.apache.org/jira/browse/HIVE-16366
> Project: Hive
>  Issue Type: Bug
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
>Priority: Blocker
>  Labels: 2.3.0
> Fix For: 2.3.0
>
> Attachments: HIVE-16366-branch-2.3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16371) Add bitmap selection strategy for druid storage handler

2017-04-04 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955491#comment-15955491
 ] 

Hive QA commented on HIVE-16371:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12861911/HIVE-16371.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 10576 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[drop_with_concurrency]
 (batchId=234)
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[escape_comments] 
(batchId=234)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_if_expr]
 (batchId=142)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/4542/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/4542/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-4542/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12861911 - PreCommit-HIVE-Build

> Add bitmap selection strategy for druid storage handler
> ---
>
> Key: HIVE-16371
> URL: https://issues.apache.org/jira/browse/HIVE-16371
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Affects Versions: storage-2.2.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 3.0.0
>
> Attachments: HIVE-16371.patch
>
>
> Currently only the Concise bitmap strategy is supported.
> This PR makes Roaring bitmap encoding the default and keeps Concise as an 
> option if needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-16349) Enable DDL statement for non-native tables

2017-04-04 Thread Thejas M Nair (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955476#comment-15955476
 ] 

Thejas M Nair commented on HIVE-16349:
--

Thanks for the updated patch [~pxiong].
This also adds support for rename. What is the expected behavior for managed 
tables in the case of a rename? Is it supposed to rename at the storage level 
as well? We should clarify what the expected behavior is here. Can you also 
add a test to make sure that we can read from the renamed hive-hbase table and 
also drop it? (If you prefer, we can move the rename part to a separate jira.)


> Enable DDL statement for non-native tables
> --
>
> Key: HIVE-16349
> URL: https://issues.apache.org/jira/browse/HIVE-16349
> Project: Hive
>  Issue Type: New Feature
>Reporter: Pengcheng Xiong
>Assignee: Pengcheng Xiong
> Attachments: HIVE-16349.01.patch, HIVE-16349.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HIVE-15996) Implement multiargument GROUPING function

2017-04-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955468#comment-15955468
 ] 

Ashutosh Chauhan commented on HIVE-15996:
-

+1

> Implement multiargument GROUPING function
> -
>
> Key: HIVE-15996
> URL: https://issues.apache.org/jira/browse/HIVE-15996
> Project: Hive
>  Issue Type: New Feature
>Affects Versions: 2.2.0
>Reporter: Carter Shanklin
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-15996.01.patch, HIVE-15996.02.patch, 
> HIVE-15996.03.patch, HIVE-15996.04.patch
>
>
> Per the SQL standard section 6.9:
> GROUPING ( CR1, ..., CRN-1, CRN )
> is equivalent to:
> CAST ( ( 2 * GROUPING ( CR1, ..., CRN-1 ) + GROUPING ( CRN ) ) AS IDT )
> So for example:
> select c1, c2, c3, grouping(c1, c2, c3) from e011_02 group by rollup(c1, c2, 
> c3);
> Should be allowed and equivalent to:
> select c1, c2, c3, 4*grouping(c1) + 2*grouping(c2) + grouping(c3) from 
> e011_02 group by rollup(c1, c2, c3);
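The bit-weighting in the equivalence above can be illustrated with a small self-contained sketch (hypothetical helper names, not Hive code): each argument contributes one bit, with the leftmost column most significant, which is exactly why the three-argument form expands to 4*grouping(c1) + 2*grouping(c2) + grouping(c3).

```java
public class GroupingBits {
    // Per-column grouping(c): 1 if the column is aggregated away in this
    // result row, else 0. The multi-argument form folds these bits into one
    // integer, leftmost argument = most significant bit.
    static int grouping(int... bits) {
        int result = 0;
        for (int b : bits) {
            // Matches the standard's recursion: 2 * GROUPING(CR1..CRn-1) + GROUPING(CRn)
            result = result * 2 + b;
        }
        return result;
    }

    public static void main(String[] args) {
        // rollup(c1, c2, c3): the grand-total row aggregates away all three columns.
        System.out.println(grouping(1, 1, 1)); // 7 == 4*1 + 2*1 + 1
        // A row grouped on c1 only: c2 and c3 are aggregated away.
        System.out.println(grouping(0, 1, 1)); // 3 == 4*0 + 2*1 + 1
    }
}
```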





[jira] [Commented] (HIVE-15986) Support "is [not] distinct from"

2017-04-04 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955463#comment-15955463
 ] 

Vineet Garg commented on HIVE-15986:


Review board created and linked to the JIRA

> Support "is [not] distinct from"
> 
>
> Key: HIVE-15986
> URL: https://issues.apache.org/jira/browse/HIVE-15986
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Vineet Garg
> Attachments: HIVE-15986.1.patch
>
>
> Support standard "is [not] distinct from" syntax. For example, this gives a 
> standard way to do a null-safe comparison in a join: select * from t1 join t2 
> on t1.x is not distinct from t2.y. SQL standard reference: Section 8.15.





[jira] [Commented] (HIVE-13517) Hive logs in Spark Executor and Driver should show thread-id.

2017-04-04 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955462#comment-15955462
 ] 

Sahil Takiar commented on HIVE-13517:
-

Sounds good [~xuefuz], thanks for spending time to take a look at this!

> Hive logs in Spark Executor and Driver should show thread-id.
> -
>
> Key: HIVE-13517
> URL: https://issues.apache.org/jira/browse/HIVE-13517
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Affects Versions: 1.2.1, 2.0.0
>Reporter: Szehon Ho
>Assignee: Sahil Takiar
> Attachments: executor-driver-log.PNG, HIVE-13517.1.patch, 
> HIVE-13517.2.patch
>
>
> In Spark, there might be more than one task running in one executor. 
> Similarly, there may be more than one thread running in Driver.
> This makes debugging through the logs a nightmare. It would be great if there 
> could be thread-ids in the logs.





[jira] [Commented] (HIVE-16368) Unexpected java.lang.ArrayIndexOutOfBoundsException from query with LaterView Operation for hive on MR.

2017-04-04 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955461#comment-15955461
 ] 

Ashutosh Chauhan commented on HIVE-16368:
-

Can you please add a .q test case with the patch?

> Unexpected java.lang.ArrayIndexOutOfBoundsException from query with LaterView 
> Operation for hive on MR.
> ---
>
> Key: HIVE-16368
> URL: https://issues.apache.org/jira/browse/HIVE-16368
> Project: Hive
>  Issue Type: Bug
>  Components: Query Planning
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HIVE-16368.000.patch
>
>
> Unexpected java.lang.ArrayIndexOutOfBoundsException from a query. It happened 
> in the LateralView operation with Hive on MR. The reason is that column 
> pruning changes the column order in the LateralView operation. For 
> back-to-back ReduceSink operators on the MR engine, a FileSinkOperator and a 
> TableScanOperator are added before the second ReduceSink operator, so the 
> serialization column order used by the FileSinkOperator in LazyBinarySerDe of 
> the previous reducer differs from the deserialization column order from the 
> table desc used by the MapOperator/TableScanOperator in LazyBinarySerDe of 
> the current (failing) mapper.
> The serialization is decided by the outputObjInspector from 
> LateralViewJoinOperator,
> {code}
> ArrayList<String> fieldNames = conf.getOutputInternalColNames();
> outputObjInspector = ObjectInspectorFactory
> .getStandardStructObjectInspector(fieldNames, ois);
> {code}
> So the column order for serialization is decided by getOutputInternalColNames 
> in LateralViewJoinOperator.
> The deserialization is decided by TableScanOperator which is created at  
> GenMapRedUtils.splitTasks. 
> {code}
> TableDesc tt_desc = PlanUtils.getIntermediateFileTableDesc(PlanUtils
> .getFieldSchemasFromRowSchema(parent.getSchema(), "temporarycol"));
> // Create the temporary file, its corresponding FileSinkOperaotr, and
> // its corresponding TableScanOperator.
> TableScanOperator tableScanOp =
> createTemporaryFile(parent, op, taskTmpDir, tt_desc, parseCtx);
> {code}
> The column order for deserialization is decided by rowSchema of 
> LateralViewJoinOperator.
> But ColumnPrunerLateralViewJoinProc changes the order of 
> outputInternalColNames while keeping the original order of rowSchema, 
> which causes the mismatch between serialization and deserialization for two 
> back-to-back MR jobs.
> There is a similar issue in ColumnPrunerLateralViewForwardProc, which changes 
> the column order of its child selector's colList but not its rowSchema.
> The exception is 
> {code}
> Caused by: java.lang.ArrayIndexOutOfBoundsException: 875968094
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.byteArrayToLong(LazyBinaryUtils.java:78)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryDouble.init(LazyBinaryDouble.java:43)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.uncheckedGetField(LazyBinaryStruct.java:264)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:201)
>   at 
> org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:64)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeColumnEvaluator._evaluate(ExprNodeColumnEvaluator.java:94)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:77)
>   at 
> org.apache.hadoop.hive.ql.exec.ExprNodeEvaluator.evaluate(ExprNodeEvaluator.java:65)
>   at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.makeValueWritable(ReduceSinkOperator.java:554)
>   at 
> org.apache.hadoop.hive.ql.exec.ReduceSinkOperator.processOp(ReduceSinkOperator.java:381)
> {code}
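The failure mode described above can be sketched with a minimal, self-contained example (hypothetical helpers, not Hive's actual LazyBinarySerDe): a positional serde writes fields in the writer's column order and the reader re-binds them by position, so any divergence between the two orders silently mixes up columns, or blows up when the byte layout of one type is read as another.

```java
import java.util.Arrays;
import java.util.List;
import java.util.HashMap;
import java.util.Map;

public class ColumnOrderMismatch {
    // Writer side: emit fields in the writer's column order (like the
    // FileSinkOperator serializing with outputInternalColNames).
    static Object[] serialize(List<String> writerOrder, Map<String, Object> row) {
        return writerOrder.stream().map(row::get).toArray();
    }

    // Reader side: bind columns by position using the reader's schema order
    // (like the next mapper's TableScanOperator using the rowSchema-derived desc).
    static Object readColumn(Object[] record, List<String> readerOrder, String col) {
        return record[readerOrder.indexOf(col)];
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("id", 42L);
        row.put("score", 0.5d);

        // Column pruning reordered the writer's internal column names...
        List<String> writerOrder = Arrays.asList("score", "id");
        // ...but the intermediate table desc kept the original rowSchema order.
        List<String> readerOrder = Arrays.asList("id", "score");

        Object[] record = serialize(writerOrder, row);
        // The reader now sees the "score" value where it expects "id" -- in
        // Hive this is where the double's bytes get read as a long and the
        // ArrayIndexOutOfBoundsException surfaces.
        System.out.println(readColumn(record, readerOrder, "id")); // 0.5, not 42
    }
}
```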





[jira] [Updated] (HIVE-16171) Support replication of truncate table

2017-04-04 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-16171:

Status: Patch Available  (was: Open)

> Support replication of truncate table
> -
>
> Key: HIVE-16171
> URL: https://issues.apache.org/jira/browse/HIVE-16171
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 2.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR
> Attachments: HIVE-16171.01.patch, HIVE-16171.02.patch, 
> HIVE-16171.03.patch, HIVE-16171.04.patch
>
>
> Need to support truncate table for replication. Key points to note:
> 1. For a non-partitioned table, truncate table will remove all the rows from 
> the table.
> 2. For partitioned tables, we need to consider how truncate behaves when 
> truncating a partition versus the whole table.
> 3. Bootstrap load with truncate table must work, as it is just 
> loadTable/loadPartition with an empty dataset.
> 4. It is suggested to re-use the alter table/alter partition events to handle 
> truncate.





[jira] [Updated] (HIVE-16171) Support replication of truncate table

2017-04-04 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-16171:

Attachment: (was: HIVE-16171.04.patch)

> Support replication of truncate table
> -
>
> Key: HIVE-16171
> URL: https://issues.apache.org/jira/browse/HIVE-16171
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 2.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR
> Attachments: HIVE-16171.01.patch, HIVE-16171.02.patch, 
> HIVE-16171.03.patch, HIVE-16171.04.patch
>
>
> Need to support truncate table for replication. Key points to note:
> 1. For a non-partitioned table, truncate table will remove all the rows from 
> the table.
> 2. For partitioned tables, we need to consider how truncate behaves when 
> truncating a partition versus the whole table.
> 3. Bootstrap load with truncate table must work, as it is just 
> loadTable/loadPartition with an empty dataset.
> 4. It is suggested to re-use the alter table/alter partition events to handle 
> truncate.





[jira] [Updated] (HIVE-16171) Support replication of truncate table

2017-04-04 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-16171:

Status: Open  (was: Patch Available)

> Support replication of truncate table
> -
>
> Key: HIVE-16171
> URL: https://issues.apache.org/jira/browse/HIVE-16171
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl
>Affects Versions: 2.1.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>  Labels: DR
> Attachments: HIVE-16171.01.patch, HIVE-16171.02.patch, 
> HIVE-16171.03.patch, HIVE-16171.04.patch
>
>
> Need to support truncate table for replication. Key points to note:
> 1. For a non-partitioned table, truncate table will remove all the rows from 
> the table.
> 2. For partitioned tables, we need to consider how truncate behaves when 
> truncating a partition versus the whole table.
> 3. Bootstrap load with truncate table must work, as it is just 
> loadTable/loadPartition with an empty dataset.
> 4. It is suggested to re-use the alter table/alter partition events to handle 
> truncate.





[jira] [Updated] (HIVE-15724) getPrimaryKeys and getForeignKeys in metastore does not normalize db and table name

2017-04-04 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-15724:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks, Daniel!

> getPrimaryKeys and getForeignKeys in metastore does not normalize db and 
> table name
> ---
>
> Key: HIVE-15724
> URL: https://issues.apache.org/jira/browse/HIVE-15724
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 3.0.0
>
> Attachments: HIVE-15724.1.patch, HIVE-15724.2.patch
>
>
> In the db, everything is lower case. When we retrieve constraints back, we 
> need to normalize the dbname/tablename. Otherwise, the following sample 
> script fails:
> alter table Table9 add constraint pk1 primary key (a) disable novalidate;
> ALTER TABLE Table9 drop constraint pk1;
> Error message: InvalidObjectException(message:The constraint: pk1 does not 
> exist for the associated table: default.Table9
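The fix described above amounts to lower-casing user-supplied identifiers before they are compared against what the DB stored. A minimal sketch of that normalization (hypothetical helper, not the actual metastore code):

```java
import java.util.Locale;

public class IdentifierNormalization {
    // Identifiers are stored lower-cased in the metastore DB, so lookups must
    // lower-case the user-supplied db/table names the same way before matching.
    static String normalize(String identifier) {
        return identifier.trim().toLowerCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        // "Table9" as typed by the user matches "table9" as stored in the DB.
        System.out.println(normalize("Table9")); // table9
        System.out.println(normalize("Default")); // default
    }
}
```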





[jira] [Updated] (HIVE-16254) metadata for values temporary tables for INSERTs are getting replicated during bootstrap

2017-04-04 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-16254:

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master. Thanks, Anishek!

> metadata for values temporary tables for INSERTs are getting replicated 
> during bootstrap
> 
>
> Key: HIVE-16254
> URL: https://issues.apache.org/jira/browse/HIVE-16254
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.2.0
>Reporter: anishek
>Assignee: anishek
> Fix For: 3.0.0
>
> Attachments: HIVE-16254.3.patch, HIVE-16254.4.patch, 
> HIVE-16254.5.patch
>
>
> create table a (age int);
> insert into table a values (34),(4);
> repl dump default;
> A temporary table is created as values__tmp__table__[number], which is 
> also present in the dumped information with only metadata; this should not be 
> processed.





[jira] [Commented] (HIVE-16371) Add bitmap selection strategy for druid storage handler

2017-04-04 Thread slim bouguerra (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955368#comment-15955368
 ] 

slim bouguerra commented on HIVE-16371:
---

[~jcamachorodriguez] can you please look at this small patch.

> Add bitmap selection strategy for druid storage handler
> ---
>
> Key: HIVE-16371
> URL: https://issues.apache.org/jira/browse/HIVE-16371
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Affects Versions: storage-2.2.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 3.0.0
>
> Attachments: HIVE-16371.patch
>
>
> Currently only Concise Bitmap strategy is supported.
> This PR is to make Roaring bitmap encoding the default and Concise optional 
> if needed.





[jira] [Updated] (HIVE-16371) Add bitmap selection strategy for druid storage handler

2017-04-04 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-16371:
--
Fix Version/s: 3.0.0
Affects Version/s: storage-2.2.0
   Status: Patch Available  (was: Open)

> Add bitmap selection strategy for druid storage handler
> ---
>
> Key: HIVE-16371
> URL: https://issues.apache.org/jira/browse/HIVE-16371
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Affects Versions: storage-2.2.0
>Reporter: slim bouguerra
>Assignee: slim bouguerra
> Fix For: 3.0.0
>
>
> Currently only Concise Bitmap strategy is supported.
> This PR is to make Roaring bitmap encoding the default and Concise optional 
> if needed.





[jira] [Commented] (HIVE-16206) Make Codahale metrics reporters pluggable

2017-04-04 Thread Sunitha Beeram (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955363#comment-15955363
 ] 

Sunitha Beeram commented on HIVE-16206:
---

Thanks [~cwsteinbach]. [~leftylev], I've updated the documentation for the 
configuration parameters introduced/deprecated through this commit 
(https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties). 
Let me know if anything else needs to be updated.

> Make Codahale metrics reporters pluggable
> -
>
> Key: HIVE-16206
> URL: https://issues.apache.org/jira/browse/HIVE-16206
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 2.1.2
>Reporter: Sunitha Beeram
>Assignee: Sunitha Beeram
> Fix For: 3.0.0
>
> Attachments: HIVE-16206.2.patch, HIVE-16206.3.patch, 
> HIVE-16206.4.patch, HIVE-16206.5.patch, HIVE-16206.6.patch, 
> HIVE-16206.7.patch, HIVE-16206.patch
>
>
> Hive metrics code currently allows pluggable metrics handlers, i.e., handlers 
> that take care of providing interfaces for metrics collection as well as 
> reporting; one of the 'handlers' is CodahaleMetrics. Codahale can work with 
> different reporters - currently supported ones are Console, JMX, JSON file, 
> and the hadoop2 sink. However, adding a new reporter involves changing that 
> class. We would like to make this conf-driven, just the way MetricsFactory 
> handles configurable Metrics classes.
> Scope of work:
> - Provide a new configuration option, HIVE_CODAHALE_REPORTER_CLASSES that 
> enumerates classes (like HIVE_METRICS_CLASS and unlike HIVE_METRICS_REPORTER).
> - Move JsonFileReporter into its own class.
> - Update CodahaleMetrics.java to read the new config option (and, if the new 
> option is not present, look for the old option and instantiate accordingly), 
> i.e., make the code backward compatible.
> - Update and add new tests.
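The conf-driven pattern the scope of work describes can be sketched as follows (hypothetical names, not the actual CodahaleMetrics implementation): read a comma-separated class list from configuration and instantiate each reporter reflectively, so adding a reporter no longer requires editing the metrics class itself.

```java
import java.util.ArrayList;
import java.util.List;

public class PluggableReporters {
    // Stand-in for a Codahale reporter abstraction.
    interface Reporter { String name(); }

    public static class ConsoleReporter implements Reporter {
        public String name() { return "console"; }
    }

    // Instantiate every class named in the (comma-separated) config value.
    static List<Reporter> loadReporters(String confValue) throws Exception {
        List<Reporter> reporters = new ArrayList<>();
        for (String cls : confValue.split(",")) {
            reporters.add((Reporter) Class.forName(cls.trim())
                    .getDeclaredConstructor().newInstance());
        }
        return reporters;
    }

    public static void main(String[] args) throws Exception {
        // In Hive this value would come from a config key like the proposed
        // HIVE_CODAHALE_REPORTER_CLASSES.
        List<Reporter> rs = loadReporters(ConsoleReporter.class.getName());
        System.out.println(rs.get(0).name()); // console
    }
}
```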





[jira] [Updated] (HIVE-16254) metadata for values temporary tables for INSERTs are getting replicated during bootstrap

2017-04-04 Thread Sushanth Sowmyan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sushanth Sowmyan updated HIVE-16254:

Summary: metadata for values temporary tables for INSERTs are getting 
replicated during bootstrap  (was: metadata for values temporary tables for 
INSERT's are getting replicated)

> metadata for values temporary tables for INSERTs are getting replicated 
> during bootstrap
> 
>
> Key: HIVE-16254
> URL: https://issues.apache.org/jira/browse/HIVE-16254
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 2.2.0
>Reporter: anishek
>Assignee: anishek
> Attachments: HIVE-16254.3.patch, HIVE-16254.4.patch, 
> HIVE-16254.5.patch
>
>
> create table a (age int);
> insert into table a values (34),(4);
> repl dump default;
> A temporary table is created as values__tmp__table__[number], which is 
> also present in the dumped information with only metadata; this should not be 
> processed.





[jira] [Assigned] (HIVE-16371) Add bitmap selection strategy for druid storage handler

2017-04-04 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra reassigned HIVE-16371:
-


> Add bitmap selection strategy for druid storage handler
> ---
>
> Key: HIVE-16371
> URL: https://issues.apache.org/jira/browse/HIVE-16371
> Project: Hive
>  Issue Type: Improvement
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>
> Currently only Concise Bitmap strategy is supported.
> This PR is to make Roaring bitmap encoding the default and Concise optional 
> if needed.





[jira] [Updated] (HIVE-16164) Provide mechanism for passing HMS notification ID between transactional and non-transactional listeners.

2017-04-04 Thread Sergio Peña (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-16164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergio Peña updated HIVE-16164:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   2.3.0
   Status: Resolved  (was: Patch Available)

> Provide mechanism for passing HMS notification ID between transactional and 
> non-transactional listeners.
> 
>
> Key: HIVE-16164
> URL: https://issues.apache.org/jira/browse/HIVE-16164
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Reporter: Sergio Peña
>Assignee: Sergio Peña
> Fix For: 2.3.0, 3.0.0
>
> Attachments: HIVE-16164.1.patch, HIVE-16164.2.patch, 
> HIVE-16164.3.patch, HIVE-16164.6.patch, HIVE-16164.7.patch, HIVE-16164.8.patch
>
>
> The HMS DB notification listener currently stores an event ID on the HMS 
> backend DB so that external applications (such as backup apps) can request 
> incremental notifications based on the last event ID requested.
> The HMS DB notification and backup applications are asynchronous. However, 
> there are times when applications may be required to be in sync with the 
> latest HMS event in order to process an action. These applications will 
> provide a listener implementation that is called by the HMS after an HMS 
> transaction has happened.
> The problem is that the listener running after the transaction (or during the 
> non-transactional context) may need the DB event ID in order to sync all 
> events that happened prior to that event ID, but this ID is never passed to 
> the non-transactional listeners.
> We can pass this event information through the EnvironmentContext found on 
> each ListenerEvent implementations (such as CreateTableEvent), and send the 
> EnvironmentContext to the non-transactional listeners to get the event ID.
> The DbNotificationListener already knows the event ID after calling 
> ObjectStore.addNotificationEvent(). We just need to set this event ID in the 
> EnvironmentContext for each of the event notifications and make sure that 
> this EnvironmentContext is sent to the non-transactional listeners.
> Here's the code example when creating a table on {{create_table_core}}:
> {noformat}
> ms.createTable(tbl);
> if (transactionalListeners.size() > 0) {
>   CreateTableEvent createTableEvent = new CreateTableEvent(tbl, true, this);
>   createTableEvent.setEnvironmentContext(envContext);
>   for (MetaStoreEventListener transactionalListener : transactionalListeners) {
>     transactionalListener.onCreateTable(createTableEvent); // <- Here the notification ID is generated
>   }
> }
> success = ms.commitTransaction();
> } finally {
>   if (!success) {
>     ms.rollbackTransaction();
>     if (madeDir) {
>       wh.deleteDir(tblPath, true);
>     }
>   }
>   for (MetaStoreEventListener listener : listeners) {
>     CreateTableEvent createTableEvent = new CreateTableEvent(tbl, success, this);
>     createTableEvent.setEnvironmentContext(envContext);
>     listener.onCreateTable(createTableEvent); // <- Here we would like to consume the notification ID
>   }
> {noformat}
> We could use a specific key name that will be used on the EnvironmentContext, 
> such as DB_NOTIFICATION_EVENT_ID.
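The proposal above can be sketched with a simplified, self-contained example (stand-in classes, not the real metastore types): the transactional listener records the generated event ID into the shared context's property bag under the suggested key, and the non-transactional listener reads it back after the commit.

```java
import java.util.HashMap;
import java.util.Map;

public class EventIdPassing {
    static final String DB_NOTIFICATION_EVENT_ID = "DB_NOTIFICATION_EVENT_ID";

    // Stand-in for Hive's EnvironmentContext: a property bag attached to the event.
    static class EnvironmentContext {
        final Map<String, String> properties = new HashMap<>();
    }

    // Runs inside the metastore transaction; it knows the ID generated by
    // ObjectStore.addNotificationEvent() and publishes it into the context.
    static void transactionalListener(EnvironmentContext ctx, long generatedEventId) {
        ctx.properties.put(DB_NOTIFICATION_EVENT_ID, Long.toString(generatedEventId));
    }

    // Runs after the commit; it can now sync up to that same event ID.
    static long nonTransactionalListener(EnvironmentContext ctx) {
        return Long.parseLong(ctx.properties.get(DB_NOTIFICATION_EVENT_ID));
    }

    public static void main(String[] args) {
        EnvironmentContext ctx = new EnvironmentContext();
        transactionalListener(ctx, 123L);
        System.out.println(nonTransactionalListener(ctx)); // 123
    }
}
```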





[jira] [Commented] (HIVE-15795) Support Accumulo Index Tables in Hive Accumulo Connector

2017-04-04 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15955215#comment-15955215
 ] 

Josh Elser commented on HIVE-15795:
---

[~sushanth], [~thejas], are either of you fine gentlemen able to help shepherd this 
change in? Mike's latest on reviewboard is awesome and would be a great 
improvement.

> Support Accumulo Index Tables in Hive Accumulo Connector
> 
>
> Key: HIVE-15795
> URL: https://issues.apache.org/jira/browse/HIVE-15795
> Project: Hive
>  Issue Type: Improvement
>  Components: Accumulo Storage Handler
>Reporter: Mike Fagan
>Assignee: Mike Fagan
>Priority: Minor
> Attachments: HIVE-15795.1.patch
>
>
> Ability to specify an accumulo index table for an accumulo-hive table.
> This would greatly improve performance for non-rowid query predicates.




