[jira] [Updated] (HIVE-22510) Support decimal64 operations for column operands with different scales

2019-11-21 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22510:

Attachment: HIVE-22510.7.patch
Status: Patch Available  (was: Open)

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22510.2.patch, HIVE-22510.3.patch, 
> HIVE-22510.4.patch, HIVE-22510.5.patch, HIVE-22510.7.patch
>
>
> Right now, if the operands of the decimal64 operations are columns with 
> different scales, we do not use the decimal64 vectorized version and 
> fall back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether we can use the decimal64 vectorized version even if the scales 
> are different.
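
As an illustration of the kind of expression involved, here is a minimal HiveQL sketch (the table and column names are hypothetical; whether the decimal64 path applies still depends on the vectorizer's checks):

{noformat}
-- Hypothetical ORC table: both columns fit in decimal64 (precision <= 18)
-- but carry different scales (2 vs 4).
CREATE TABLE sales_d64 (price DECIMAL(10,2), tax DECIMAL(10,4)) STORED AS ORC;

-- Before this change, an arithmetic expression over the two columns falls back
-- to the HiveDecimal vectorized operator because of the scale mismatch; the goal
-- here is to keep such expressions on the decimal64 vectorized path when possible.
EXPLAIN VECTORIZATION DETAIL
SELECT price + tax FROM sales_d64;
{noformat}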



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22510) Support decimal64 operations for column operands with different scales

2019-11-21 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22510:

Status: Open  (was: Patch Available)

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22510.2.patch, HIVE-22510.3.patch, 
> HIVE-22510.4.patch, HIVE-22510.5.patch, HIVE-22510.7.patch
>
>
> Right now, if the operands of the decimal64 operations are columns with 
> different scales, we do not use the decimal64 vectorized version and 
> fall back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether we can use the decimal64 vectorized version even if the scales 
> are different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar

2019-11-21 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-22483:
-
Attachment: HIVE-22483.05.patch
Status: Patch Available  (was: Open)

> Vectorize UDF datetime_legacy_hybrid_calendar
> -
>
> Key: HIVE-22483
> URL: https://issues.apache.org/jira/browse/HIVE-22483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, 
> HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, 
> HIVE-22483.04.patch, HIVE-22483.05.patch, HIVE-22483.05.patch, 
> HIVE-22483.05.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar

2019-11-21 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-22483:
-
Status: Open  (was: Patch Available)

> Vectorize UDF datetime_legacy_hybrid_calendar
> -
>
> Key: HIVE-22483
> URL: https://issues.apache.org/jira/browse/HIVE-22483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, 
> HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, 
> HIVE-22483.04.patch, HIVE-22483.05.patch, HIVE-22483.05.patch, 
> HIVE-22483.05.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22510) Support decimal64 operations for column operands with different scales

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979935#comment-16979935
 ] 

Hive QA commented on HIVE-22510:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986443/HIVE-22510.5.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 17719 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[metadata_only_queries_with_filters]
 (batchId=77)
org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation
 (batchId=279)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19541/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19541/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19541/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986443 - PreCommit-HIVE-Build

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22510.2.patch, HIVE-22510.3.patch, 
> HIVE-22510.4.patch, HIVE-22510.5.patch
>
>
> Right now, if the operands of the decimal64 operations are columns with 
> different scales, we do not use the decimal64 vectorized version and 
> fall back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether we can use the decimal64 vectorized version even if the scales 
> are different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread Ashutosh Bapat (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-22512:
--
Attachment: HIVE-22512.03.patch
Status: Patch Available  (was: In Progress)

Patch with Mahesh's comments addressed.

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch, 
> HIVE-22512.03.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has tried to retrieve go to waste anyway.
> Fix this by using direct SQL to fetch column privileges.
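
For reference, this is roughly the shape of the single direct-SQL query that replaces the per-column JDO fetches (a sketch based on the query text in the review comments quoted further down; it assumes the TBL_COL_PRIVS, TBLS and DBS constants resolve to the correspondingly named metastore tables, and the trailing AUTHORIZER predicate is appended only when an authorizer is supplied):

{noformat}
SELECT "TBL_COL_PRIVS"."AUTHORIZER", "TBL_COL_PRIVS"."COLUMN_NAME",
       "TBL_COL_PRIVS"."CREATE_TIME", "TBL_COL_PRIVS"."GRANT_OPTION",
       "TBL_COL_PRIVS"."GRANTOR", "TBL_COL_PRIVS"."GRANTOR_TYPE",
       "TBL_COL_PRIVS"."PRINCIPAL_NAME", "TBL_COL_PRIVS"."PRINCIPAL_TYPE",
       "TBL_COL_PRIVS"."TBL_COL_PRIV", "TBL_COL_PRIVS"."TBL_COLUMN_GRANT_ID"
FROM "TBL_COL_PRIVS"
LEFT OUTER JOIN "TBLS" ON "TBL_COL_PRIVS"."TBL_ID" = "TBLS"."TBL_ID"
LEFT OUTER JOIN "DBS" ON "TBLS"."DB_ID" = "DBS"."DB_ID"
WHERE "TBLS"."TBL_NAME" = ?
  AND "DBS"."NAME" = ?
  AND "DBS"."CTLG_NAME" = ?
  -- appended only when an authorizer is passed:
  AND "TBL_COL_PRIVS"."AUTHORIZER" = ?
{noformat}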



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread Ashutosh Bapat (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Bapat updated HIVE-22512:
--
Status: In Progress  (was: Patch Available)

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch, 
> HIVE-22512.03.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has tried to retrieve go to waste anyway.
> Fix this by using direct SQL to fetch column privileges.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22510) Support decimal64 operations for column operands with different scales

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979909#comment-16979909
 ] 

Hive QA commented on HIVE-22510:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
3s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
19s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
51s{color} | {color:red} ql: The patch generated 2 new + 794 unchanged - 1 
fixed = 796 total (was 795) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 29m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19541/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19541/yetus/diff-checkstyle-ql.txt
 |
| modules | C: vector-code-gen ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19541/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22510.2.patch, HIVE-22510.3.patch, 
> HIVE-22510.4.patch, HIVE-22510.5.patch
>
>
> Right now, if the operands of the decimal64 operations are columns with 
> different scales, we do not use the decimal64 vectorized version and 
> fall back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether we can use the decimal64 vectorized version even if the scales 
> are different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979889#comment-16979889
 ] 

Hive QA commented on HIVE-22499:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986442/HIVE-22499.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19540/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19540/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19540/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2019-11-22 06:08:32.940
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-19540/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2019-11-22 06:08:32.943
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 13fc651 HIVE-22369 Handle HiveTableFunctionScan at return path 
(Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 13fc651 HIVE-22369 Handle HiveTableFunctionScan at return path 
(Miklos Gergely, reviewed by Jesus Camacho Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2019-11-22 06:08:34.187
+ rm -rf ../yetus_PreCommit-HIVE-Build-19540
+ mkdir ../yetus_PreCommit-HIVE-Build-19540
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-19540
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-19540/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Going to apply patch with: git apply -p0
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc4225362482353215326.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc4225362482353215326.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
protoc-jar: executing: [/tmp/protoc3830939544524509439.exe, --version]
libprotoc 2.5.0
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
[ERROR] Failed to execute goal on project hive-common: Could not resolve 
dependencies for project org.apache.hive:hive-common:jar:4.0.0-SNAPSHOT: Could 
not find artifact org.apache.orc:orc-core:jar:1.5.8rc0 in central 
(https://repo.maven.apache.org/maven2) -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[jira] [Commented] (HIVE-22524) CommandProcessorException should utilize standard Exception fields

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979887#comment-16979887
 ] 

Hive QA commented on HIVE-22524:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986435/HIVE-22524.01.patch

{color:green}SUCCESS:{color} +1 due to 14 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17715 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19539/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19539/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19539/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986435 - PreCommit-HIVE-Build

> CommandProcessorException should utilize standard Exception fields
> --
>
> Key: HIVE-22524
> URL: https://issues.apache.org/jira/browse/HIVE-22524
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22524.01.patch
>
>
> CommandProcessorException right now has:
> * getCause() inherited from Exception
> * getException() local implementation
> * getMessage() inherited from Exception
> * getErrorMessage() local implementation



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection

2019-11-21 Thread Panagiotis Garefalakis (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979885#comment-16979885
 ] 

Panagiotis Garefalakis commented on HIVE-22505:
---

Hey [~jcamachorodriguez] [~bslim] [~rzhappy]

Could you please take a look at the patch? I think you are all familiar with 
the issue. The tests are flaky but passing, and note that the whole Vectorizer 
class has style warnings, so I wanted to avoid refactoring the whole file.

> ClassCastException caused by wrong Vectorized operator selection
> 
>
> Key: HIVE-22505
> URL: https://issues.apache.org/jira/browse/HIVE-22505
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Critical
> Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, 
> HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, 
> HIVE-22505.7.patch, HIVE-22505.patch, query_error.out, 
> query_vector_explain.out, vectorized_join.q
>
>
> VectorMapJoinOuterFilteredOperator does not currently support full outer 
> joins, but with the current Vectorizer logic it can be selected when there 
> is a filter involved. This can make queries fail with a ClassCastException when 
> the data and metadata in the VectorMapJoinOuterFilteredOperator do not 
> match.
> The attached query demonstrates the issue and the attached log shows the 
> java.lang.ClassCastException.
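
The attached vectorized_join.q is not reproduced here; purely as a rough illustration, a query of the following shape (hypothetical tables, vectorization and map-join conversion enabled) is the kind of outer map join with an extra filter predicate that the description refers to:

{noformat}
SET hive.vectorized.execution.enabled=true;
SET hive.auto.convert.join=true;

-- Hypothetical tables; the extra predicate on the small side alongside the join key
-- is the "filter involved" that steers the plan toward the filtered outer map-join operator.
SELECT t1.key, t2.value
FROM t_big t1
LEFT OUTER JOIN t_small t2
  ON t1.key = t2.key AND t2.value > 0;
{noformat}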



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22524) CommandProcessorException should utilize standard Exception fields

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979874#comment-16979874
 ] 

Hive QA commented on HIVE-22524:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
35s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} llap-server in master has 90 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
41s{color} | {color:blue} service in master has 49 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
31s{color} | {color:blue} cli in master has 9 extant Findbugs warnings. {color} 
|
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} hcatalog/core in master has 36 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
30s{color} | {color:blue} hcatalog/hcatalog-pig-adapter in master has 2 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 2 new + 639 unchanged - 2 
fixed = 641 total (was 641) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  9m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19539/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19539/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql llap-server service cli hcatalog/core 
hcatalog/hcatalog-pig-adapter itests/hive-unit itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19539/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.




[jira] [Work logged] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?focusedWorklogId=347926&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347926
 ]

ASF GitHub Bot logged work on HIVE-22512:
-

Author: ASF GitHub Bot
Created on: 22/Nov/19 05:29
Start Date: 22/Nov/19 05:29
Worklog Time Spent: 10m 
  Work Description: maheshk114 commented on pull request #847: HIVE-22512 : 
Use direct SQL to fetch column privileges in refreshPrivileges.
URL: https://github.com/apache/hive/pull/847#discussion_r349437961
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 ##
 @@ -1280,6 +1284,94 @@ public ColumnStatistics getTableStats(final String 
catName, final String dbName,
 return result;
   }
 
+  public List getTableAllColumnGrants(String catName, 
String dbName,
+   String tableName, 
String authorizer) throws MetaException {
+Query query = null;
+
+// These constants should match the SELECT clause of the query.
+final int authorizerIndex = 0;
+final int columnNameIndex = 1;
+final int createTimeIndex = 2;
+final int grantOptionIndex = 3;
+final int grantorIndex = 4;
+final int grantorTypeIndex = 5;
+final int principalNameIndex = 6;
+final int principalTypeIndex = 7;
+final int privilegeIndex = 8;
+
+// Retrieve the privileges from the object store. Just grab only the 
required fields.
+String queryText = "select " +
+TBL_COL_PRIVS + ".\"AUTHORIZER\", " +
+TBL_COL_PRIVS + ".\"COLUMN_NAME\", " +
+TBL_COL_PRIVS + ".\"CREATE_TIME\", " +
+TBL_COL_PRIVS + ".\"GRANT_OPTION\", " +
+TBL_COL_PRIVS + ".\"GRANTOR\", " +
 
 Review comment:
   I am not sure whether the optimizer will change it to an inner join internally, since 
there is a foreign key constraint from TBL_COL_PRIVS to TBLS, but usually 
outer joins are costlier than inner joins.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347926)
Time Spent: 1.5h  (was: 1h 20m)

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has tried to retrieve go to waste anyway.
> Fix this by using direct SQL to fetch column privileges.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?focusedWorklogId=347923&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347923
 ]

ASF GitHub Bot logged work on HIVE-22512:
-

Author: ASF GitHub Bot
Created on: 22/Nov/19 05:15
Start Date: 22/Nov/19 05:15
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #847: 
HIVE-22512 : Use direct SQL to fetch column privileges in refreshPrivileges.
URL: https://github.com/apache/hive/pull/847#discussion_r349435783
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 ##
 @@ -1280,6 +1284,94 @@ public ColumnStatistics getTableStats(final String 
catName, final String dbName,
 return result;
   }
 
+  public List getTableAllColumnGrants(String catName, 
String dbName,
+   String tableName, 
String authorizer) throws MetaException {
+Query query = null;
+
+// These constants should match the SELECT clause of the query.
+final int authorizerIndex = 0;
+final int columnNameIndex = 1;
+final int createTimeIndex = 2;
+final int grantOptionIndex = 3;
+final int grantorIndex = 4;
+final int grantorTypeIndex = 5;
+final int principalNameIndex = 6;
+final int principalTypeIndex = 7;
+final int privilegeIndex = 8;
+
+// Retrieve the privileges from the object store. Just grab only the 
required fields.
+String queryText = "select " +
+TBL_COL_PRIVS + ".\"AUTHORIZER\", " +
+TBL_COL_PRIVS + ".\"COLUMN_NAME\", " +
+TBL_COL_PRIVS + ".\"CREATE_TIME\", " +
+TBL_COL_PRIVS + ".\"GRANT_OPTION\", " +
+TBL_COL_PRIVS + ".\"GRANTOR\", " +
+TBL_COL_PRIVS + ".\"GRANTOR_TYPE\", " +
+TBL_COL_PRIVS + ".\"PRINCIPAL_NAME\", " +
+TBL_COL_PRIVS + ".\"PRINCIPAL_TYPE\", " +
+TBL_COL_PRIVS + ".\"TBL_COL_PRIV\", " +
+TBL_COL_PRIVS + ".\"TBL_COLUMN_GRANT_ID\" " +
+"FROM " + TBL_COL_PRIVS + " LEFT OUTER JOIN " + TBLS +
+" ON " + TBL_COL_PRIVS + ".\"TBL_ID\" = " + TBLS + ".\"TBL_ID\"" +
+" LEFT OUTER JOIN " + DBS + " ON " + TBLS + ".\"DB_ID\" = " + DBS 
+ ".\"DB_ID\" " +
+" WHERE " + TBLS + ".\"TBL_NAME\" = ?" +
+" AND " + DBS + ".\"NAME\" = ?" +
+" AND " + DBS + ".\"CTLG_NAME\" = ?";
+
+// Build the parameters, they should match the WHERE clause of the query.
+int numParams = authorizer != null ? 4 : 3;
+Object[] params = new Object[numParams];
+params[0] = tableName;
+params[1] = dbName;
+params[2] = catName;
+if (authorizer != null) {
+  queryText = queryText + " AND " + TBL_COL_PRIVS + ".\"AUTHORIZER\" = ?";
+  params[3] = authorizer;
+}
+
+// Collect the results into a list that the caller can consume.
+List result = new ArrayList<>();
+try {
+  final boolean doTrace = LOG.isDebugEnabled();
+  long start = doTrace ? System.nanoTime() : 0;
+  query = pm.newQuery("javax.jdo.query.SQL", queryText);
 
 Review comment:
   Good suggestion. Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347923)
Time Spent: 1h 20m  (was: 1h 10m)

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has 

[jira] [Work logged] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?focusedWorklogId=347916&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347916
 ]

ASF GitHub Bot logged work on HIVE-22512:
-

Author: ASF GitHub Bot
Created on: 22/Nov/19 04:47
Start Date: 22/Nov/19 04:47
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #847: 
HIVE-22512 : Use direct SQL to fetch column privileges in refreshPrivileges.
URL: https://github.com/apache/hive/pull/847#discussion_r349432059
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 ##
 @@ -1280,6 +1284,94 @@ public ColumnStatistics getTableStats(final String 
catName, final String dbName,
 return result;
   }
 
+  public List getTableAllColumnGrants(String catName, 
String dbName,
+   String tableName, 
String authorizer) throws MetaException {
+Query query = null;
+
+// These constants should match the SELECT clause of the query.
+final int authorizerIndex = 0;
+final int columnNameIndex = 1;
+final int createTimeIndex = 2;
+final int grantOptionIndex = 3;
+final int grantorIndex = 4;
+final int grantorTypeIndex = 5;
+final int principalNameIndex = 6;
+final int principalTypeIndex = 7;
+final int privilegeIndex = 8;
+
+// Retrieve the privileges from the object store. Just grab only the 
required fields.
+String queryText = "select " +
+TBL_COL_PRIVS + ".\"AUTHORIZER\", " +
 
 Review comment:
   I have tested it with Derby and PostgreSQL.
   
   We are using standard SQL, so we shouldn't need to test it with other DBs. 
Also, there isn't an easy way to test it with other DBs, unless we set up a 
cluster with the respective DB and test it.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347916)
Time Spent: 1h 10m  (was: 1h)

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has tried to retrieve go to waste anyway.
> Fix this by using direct SQL to fetch column privileges.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?focusedWorklogId=347914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347914
 ]

ASF GitHub Bot logged work on HIVE-22512:
-

Author: ASF GitHub Bot
Created on: 22/Nov/19 04:46
Start Date: 22/Nov/19 04:46
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #847: 
HIVE-22512 : Use direct SQL to fetch column privileges in refreshPrivileges.
URL: https://github.com/apache/hive/pull/847#discussion_r349431816
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 ##
 @@ -1280,6 +1284,94 @@ public ColumnStatistics getTableStats(final String 
catName, final String dbName,
 return result;
   }
 
+  public List getTableAllColumnGrants(String catName, 
String dbName,
+   String tableName, 
String authorizer) throws MetaException {
+Query query = null;
+
+// These constants should match the SELECT clause of the query.
+final int authorizerIndex = 0;
+final int columnNameIndex = 1;
+final int createTimeIndex = 2;
+final int grantOptionIndex = 3;
+final int grantorIndex = 4;
+final int grantorTypeIndex = 5;
+final int principalNameIndex = 6;
+final int principalTypeIndex = 7;
+final int privilegeIndex = 8;
+
+// Retrieve the privileges from the object store. Just grab only the 
required fields.
+String queryText = "select " +
+TBL_COL_PRIVS + ".\"AUTHORIZER\", " +
+TBL_COL_PRIVS + ".\"COLUMN_NAME\", " +
+TBL_COL_PRIVS + ".\"CREATE_TIME\", " +
+TBL_COL_PRIVS + ".\"GRANT_OPTION\", " +
+TBL_COL_PRIVS + ".\"GRANTOR\", " +
+TBL_COL_PRIVS + ".\"GRANTOR_TYPE\", " +
+TBL_COL_PRIVS + ".\"PRINCIPAL_NAME\", " +
+TBL_COL_PRIVS + ".\"PRINCIPAL_TYPE\", " +
+TBL_COL_PRIVS + ".\"TBL_COL_PRIV\", " +
+TBL_COL_PRIVS + ".\"TBL_COLUMN_GRANT_ID\" " +
 
 Review comment:
   Done.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347914)
Time Spent: 1h  (was: 50m)

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has tried to retrieve go to waste anyway.
> Fix this by using direct SQL to fetch column privileges.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22512) Use direct SQL to fetch column privileges in refreshPrivileges

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22512?focusedWorklogId=347911&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347911
 ]

ASF GitHub Bot logged work on HIVE-22512:
-

Author: ASF GitHub Bot
Created on: 22/Nov/19 04:44
Start Date: 22/Nov/19 04:44
Worklog Time Spent: 10m 
  Work Description: ashutosh-bapat commented on pull request #847: 
HIVE-22512 : Use direct SQL to fetch column privileges in refreshPrivileges.
URL: https://github.com/apache/hive/pull/847#discussion_r349431537
 
 

 ##
 File path: 
standalone-metastore/metastore-server/src/main/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java
 ##
 @@ -1280,6 +1284,94 @@ public ColumnStatistics getTableStats(final String 
catName, final String dbName,
 return result;
   }
 
+  public List getTableAllColumnGrants(String catName, 
String dbName,
+   String tableName, 
String authorizer) throws MetaException {
+Query query = null;
+
+// These constants should match the SELECT clause of the query.
+final int authorizerIndex = 0;
+final int columnNameIndex = 1;
+final int createTimeIndex = 2;
+final int grantOptionIndex = 3;
+final int grantorIndex = 4;
+final int grantorTypeIndex = 5;
+final int principalNameIndex = 6;
+final int principalTypeIndex = 7;
+final int privilegeIndex = 8;
+
+// Retrieve the privileges from the object store. Just grab only the 
required fields.
+String queryText = "select " +
+TBL_COL_PRIVS + ".\"AUTHORIZER\", " +
+TBL_COL_PRIVS + ".\"COLUMN_NAME\", " +
+TBL_COL_PRIVS + ".\"CREATE_TIME\", " +
+TBL_COL_PRIVS + ".\"GRANT_OPTION\", " +
+TBL_COL_PRIVS + ".\"GRANTOR\", " +
 
 Review comment:
   JDO uses a LEFT OUTER JOIN in the corresponding query it fires, so I 
used just that. But I think the LEFT OUTER JOIN doesn't have any effect here 
because of the WHERE clause at the end. The WHERE clauses are on the RIGHT side 
columns. These clauses will evaluate to false when the RIGHT side columns are 
NULL, effectively turning this OUTER join into an INNER join.
   
   It's better to leave the LEFT JOIN as is in case someone wants to compare this 
query with the JDO query. But if you think we should use an INNER JOIN here, I will 
use an INNER JOIN.
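
To make that reasoning concrete, here is a minimal generic-SQL sketch (hypothetical tables a and b) of why a WHERE predicate on right-side columns makes the LEFT OUTER JOIN behave like an INNER JOIN:

{noformat}
-- Rows of a with no match in b get NULL for b.name; the WHERE predicate then
-- evaluates to UNKNOWN for those rows and they are filtered out, so the result
-- is the same as with an INNER JOIN.
SELECT a.id, b.name
FROM a
LEFT OUTER JOIN b ON a.b_id = b.id
WHERE b.name = 'x';
{noformat}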
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347911)
Time Spent: 50m  (was: 40m)

> Use direct SQL to fetch column privileges in refreshPrivileges
> --
>
> Key: HIVE-22512
> URL: https://issues.apache.org/jira/browse/HIVE-22512
> Project: Hive
>  Issue Type: Improvement
>  Components: Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Ashutosh Bapat
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22512.01.patch, HIVE-22512.02.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> refreshPrivileges() calls listTableAllColumnGrants() to fetch the column 
> level privileges. The latter function retrieves the individual column objects 
> by firing one query per column privilege object, thus causing the backend db 
> to be swamped by these queries when PrivilegeSynchronizer is run. 
> PrivilegeSynchronizer synchronizes privileges of all the databases, tables 
> and columns, so the backend db can get swamped really badly when there are 
> thousands of tables with hundreds of columns.
> The output of listTableAllColumnGrants() is not used completely, so all the 
> columns the PM has tried to retrieve go to waste anyway.
> Fix this by using direct SQL to fetch column privileges.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979841#comment-16979841
 ] 

Hive QA commented on HIVE-22505:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986433/HIVE-22505.7.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17717 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19538/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19538/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19538/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986433 - PreCommit-HIVE-Build

> ClassCastException caused by wrong Vectorized operator selection
> 
>
> Key: HIVE-22505
> URL: https://issues.apache.org/jira/browse/HIVE-22505
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Critical
> Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, 
> HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, 
> HIVE-22505.7.patch, HIVE-22505.patch, query_error.out, 
> query_vector_explain.out, vectorized_join.q
>
>
> VectorMapJoinOuterFilteredOperator does not currently support full outer 
> joins, but with the current Vectorizer logic it can be selected when there 
> is a filter involved. This can make queries fail with a ClassCastException when 
> the data and metadata in the VectorMapJoinOuterFilteredOperator do not 
> match.
> The attached query demonstrates the issue and the attached log shows the 
> java.lang.ClassCastException.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Description: 
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another yarn queue, not 
the queue of the current beeline client session. It seems that the file merge 
job starts a new Tez session with a new conf that differs from the current 
session conf, so the merge job goes into the default queue.

Attachment *hive logs.png* shows that the current session queue is 
*root.bdoc.production* (String queueName = session.getQueueName();) while the 
incoming queue name is *null* (String confQueueName = 
conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs, 
including the file merge job, should be submitted into the same queue.

[https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L445]

[https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L446]

Attachment *explain with merge files.png* shows that stage-4 is the standalone 
merge file job, which is submitted into another yarn queue (the default 
queue), not the queue root.bdoc.production.

  was:
Hive on Tez. We enable small file merge configuration with set 
*hive.merge.tezfiles=true*. So , There will be another job launched for merging 
files after sql job. However, the merge file job is submitted into another yarn 
queue, not the queue of current beeline client session. It seems that the 
merging files job start a new tez session with new conf which is different the 
current session conf, leading to the merging file job goes into default queue.

 

Attachment *hive logs.png* shows that current session queue is 
*root.bdoc.production* ( String queueName = session.getQueueName();) incoming 
queue name is *null* ( String confQueueName = 
conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
beeline client with *set tez.queue.name=* *root.bdoc.production,* and  all  
jobs should be submitted into the queue including file merge job.

[https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L445]

[https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L446]

 

Attachment *explain with merge files.png* shows that ** the stage-4 is 
individual merge file job which is submitted into another yarn queue(default 
queue), not the queue root.bdoc.production.


> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: explain with merge files.png, file merge job.png, hive 
> logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after the 
> SQL job. However, the merge file job is submitted into another yarn queue, not 
> the queue of the current beeline client session. It seems that the file merge 
> job starts a new Tez session with a new conf that differs from the current 
> session conf, so the merge job goes into the default queue.
>  
> Attachment *hive logs.png* shows that the current session queue is 
> *root.bdoc.production* (String queueName = session.getQueueName();) while the 
> incoming queue name is *null* (String confQueueName = 
> conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
> beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs, 
> including the file merge job, should be submitted into the same queue.
> [https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L445]
> [https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L446]
>  
> Attachment *explain with merge files.png* shows that stage-4 is the standalone 
> merge file job, which is submitted into another yarn queue (the default 
> queue), not the queue root.bdoc.production.
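
A minimal beeline-style reproduction sketch, assuming the queue name from the report (the table names are hypothetical); with hive.merge.tezfiles enabled, the extra file merge stage is the one reported to land in the default queue instead of the session's queue:

{noformat}
-- In the same beeline session:
SET tez.queue.name=root.bdoc.production;
SET hive.merge.tezfiles=true;

-- Hypothetical insert that produces many small files: the main DAG runs in
-- root.bdoc.production, while the follow-up file merge job is reported to be
-- submitted to the default queue.
INSERT OVERWRITE TABLE target_tbl SELECT * FROM source_tbl;
{noformat}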



--
This 

[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Attachment: file merge job.png

> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: explain with merge files.png, file merge job.png, hive 
> logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after the 
> SQL job. However, the merge file job is submitted into another yarn queue, not 
> the queue of the current beeline client session. It seems that the file merge 
> job starts a new Tez session with a new conf that differs from the current 
> session conf, so the merge job goes into the default queue.
>  
> Attachment *hive logs.png* shows that the current session queue is 
> *root.bdoc.production* (String queueName = session.getQueueName();) while the 
> incoming queue name is *null* (String confQueueName = 
> conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
> beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs, 
> including the file merge job, should be submitted into the queue.
> [https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L445]
> [https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L446]
>  
> Attachment *explain with merge files.png* shows that stage-4 is the standalone 
> merge file job, which is submitted into another yarn queue (the default 
> queue), not the queue root.bdoc.production.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Description: 
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another yarn queue, not 
the queue of the current beeline client session. It seems that the file merge 
job starts a new Tez session with a new conf that differs from the current 
session conf, so the merge job goes into the default queue.

Attachment *hive logs.png* shows that the current session queue is 
*root.bdoc.production* (String queueName = session.getQueueName();) while the 
incoming queue name is *null* (String confQueueName = 
conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs, 
including the file merge job, should be submitted into the queue.

[https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L445]

[https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L446]

Attachment *explain with merge files.png* shows that stage-4 is the standalone 
file merge job, which is submitted into another yarn queue (the default queue), 
not the queue root.bdoc.production.

  was:
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another YARN queue, not 
the queue of the current beeline client session. It seems that the merge job 
starts a new Tez session with a new configuration that differs from the current 
session configuration, so the merge job goes into the default queue.

 

Attachment *hive logs.png* shows that the current session queue is 
*root.bdoc.production* (String queueName = session.getQueueName();) while the 
incoming queue name is *null* (String confQueueName = 
conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs 
should be submitted into that queue.


> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: explain with merge files.png, hive logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after 
> the SQL job. However, the merge file job is submitted into another YARN 
> queue, not the queue of the current beeline client session. It seems that 
> the merge job starts a new Tez session with a new configuration that differs 
> from the current session configuration, so the merge job goes into the 
> default queue.
>  
> Attachment *hive logs.png* shows that the current session queue is 
> *root.bdoc.production* (String queueName = session.getQueueName();) while 
> the incoming queue name is *null* (String confQueueName = 
> conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
> beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs, 
> including the file merge job, should be submitted into that queue.
> [https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L445]
> [https://github.com/apache/hive/blob/bcc7df95824831a8d2f1524e4048dfc23ab98c19/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L446]
>  
> Attachment *explain with merge files.png* shows that Stage-4 is the 
> standalone merge file job, which is submitted into another YARN queue (the 
> default queue), not the queue root.bdoc.production.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Description: 
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another YARN queue, not 
the queue of the current beeline client session. It seems that the merge job 
starts a new Tez session with a new configuration that differs from the current 
session configuration, so the merge job goes into the default queue.

 

Attachment *hive logs.png* shows that the current session queue is 
*root.bdoc.production* (String queueName = session.getQueueName();) while the 
incoming queue name is *null* (String confQueueName = 
conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs 
should be submitted into that queue.

  was:
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another YARN queue, not 
the queue of the current beeline client session. It seems that the merge job 
starts a new Tez session with a new configuration that differs from the current 
session configuration, so the merge job goes into the default queue.

 

Attachment *hive logs.png* shows that the current session queue is 
*root.bdoc.production* (String queueName = session.getQueueName();).

 


> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: explain with merge files.png, hive logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after 
> the SQL job. However, the merge file job is submitted into another YARN 
> queue, not the queue of the current beeline client session. It seems that 
> the merge job starts a new Tez session with a new configuration that differs 
> from the current session configuration, so the merge job goes into the 
> default queue.
>  
> Attachment *hive logs.png* shows that the current session queue is 
> *root.bdoc.production* (String queueName = session.getQueueName();) while 
> the incoming queue name is *null* (String confQueueName = 
> conf.get(TezConfiguration.TEZ_QUEUE_NAME);). In fact, we log in to the same 
> beeline client with *set tez.queue.name=root.bdoc.production*, and all jobs 
> should be submitted into that queue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Description: 
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another YARN queue, not 
the queue of the current beeline client session. It seems that the merge job 
starts a new Tez session with a new configuration that differs from the current 
session configuration, so the merge job goes into the default queue.

 

Attachment *hive logs.png* shows that the current session queue is 
*root.bdoc.production* (String queueName = session.getQueueName();).

 

  was:
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another YARN queue, not 
the queue of the current beeline client session. It seems that the merge job 
starts a new Tez session with a new configuration that differs from the current 
session configuration, so the merge job goes into the default queue.

 


> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: explain with merge files.png, hive logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after 
> the SQL job. However, the merge file job is submitted into another YARN 
> queue, not the queue of the current beeline client session. It seems that 
> the merge job starts a new Tez session with a new configuration that differs 
> from the current session configuration, so the merge job goes into the 
> default queue.
>  
> Attachment *hive logs.png* shows that the current session queue is 
> *root.bdoc.production* (String queueName = session.getQueueName();).
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979814#comment-16979814
 ] 

Hive QA commented on HIVE-22505:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
59s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
16s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 6 new + 397 unchanged - 0 
fixed = 403 total (was 397) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19538/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19538/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19538/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ClassCastException caused by wrong Vectorized operator selection
> 
>
> Key: HIVE-22505
> URL: https://issues.apache.org/jira/browse/HIVE-22505
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Critical
> Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, 
> HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, 
> HIVE-22505.7.patch, HIVE-22505.patch, query_error.out, 
> query_vector_explain.out, vectorized_join.q
>
>
> VectorMapJoinOuterFilteredOperator does not currently support full outer 
> joins, but with the current Vectorizer logic it can still be selected when 
> there is a filter involved. This can make queries fail with a 
> ClassCastException when the data and metadata in the 
> VectorMapJoinOuterFilteredOperator do not match.
> The attached query demonstrates the issue and the attached log shows the 
> java.lang.ClassCastException.
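As a minimal sketch of the selection guard the description implies -- using 
invented enum and method names rather than Hive's real Vectorizer API -- the 
filtered vectorized map-join variant would only be chosen for filtered outer 
joins that are not FULL OUTER, with everything else falling back to the 
row-mode operator:

{code:java}
// Hypothetical selection guard, not Hive's actual Vectorizer code.
public final class MapJoinVariantSelectionSketch {

  enum JoinKind { INNER, LEFT_OUTER, RIGHT_OUTER, FULL_OUTER }

  /**
   * VectorMapJoinOuterFilteredOperator handles filtered outer joins, but it does
   * not implement FULL OUTER semantics; selecting it anyway is what produces the
   * data/metadata mismatch and the ClassCastException described above.
   */
  static boolean canUseVectorOuterFiltered(JoinKind joinKind, boolean hasJoinFilter) {
    if (!hasJoinFilter) {
      return false;                        // the filtered variant only applies when a filter exists
    }
    return joinKind != JoinKind.FULL_OUTER; // full outer joins must fall back to row mode
  }
}
{code}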



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Attachment: explain with merge files.png

> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: explain with merge files.png, hive logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after 
> the SQL job. However, the merge file job is submitted into another YARN 
> queue, not the queue of the current beeline client session. It seems that 
> the merge job starts a new Tez session with a new configuration that differs 
> from the current session configuration, so the merge job goes into the 
> default queue.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Attachment: hive logs.png

> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
> Attachments: hive logs.png
>
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after 
> the SQL job. However, the merge file job is submitted into another YARN 
> queue, not the queue of the current beeline client session. It seems that 
> the merge job starts a new Tez session with a new configuration that differs 
> from the current session configuration, so the merge job goes into the 
> default queue.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22527) Hive on Tez : Job of merging small files will be submitted into another queue (default queue)

2019-11-21 Thread zhangbutao (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangbutao updated HIVE-22527:
--
Description: 
Hive on Tez. We enable the small file merge configuration with set 
*hive.merge.tezfiles=true*, so another job is launched to merge files after the 
SQL job. However, the merge file job is submitted into another YARN queue, not 
the queue of the current beeline client session. It seems that the merge job 
starts a new Tez session with a new configuration that differs from the current 
session configuration, so the merge job goes into the default queue.

 

> Hive on Tez : Job of merging small files will be submitted into another queue 
> (default queue)
> -
>
> Key: HIVE-22527
> URL: https://issues.apache.org/jira/browse/HIVE-22527
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1
>Reporter: zhangbutao
>Priority: Blocker
>
> Hive on Tez. We enable the small file merge configuration with set 
> *hive.merge.tezfiles=true*, so another job is launched to merge files after 
> the SQL job. However, the merge file job is submitted into another YARN 
> queue, not the queue of the current beeline client session. It seems that 
> the merge job starts a new Tez session with a new configuration that differs 
> from the current session configuration, so the merge job goes into the 
> default queue.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22516) TestScheduledQueryIntegration fails occasionally

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979801#comment-16979801
 ] 

Hive QA commented on HIVE-22516:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986429/HIVE-22516.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17715 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb_schq] 
(batchId=177)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19537/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19537/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19537/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986429 - PreCommit-HIVE-Build

> TestScheduledQueryIntegration fails occasionally
> 
>
> Key: HIVE-22516
> URL: https://issues.apache.org/jira/browse/HIVE-22516
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22516.01.patch
>
>
> The failure seems to be caused by some filesystem-level operation:
> {code}
> Failed
> org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation
> Failing for the past 2 builds (Since Failed#19506 )
> Took 21 sec.
> Error Message
> java.io.IOException: ExitCodeException exitCode=1: chmod: cannot access 
> ‘/home/hiveptest/35.224.52.88-hiveptest-0/apache-github-source-source/target/tmp/junit9072291964634791171/scratchdir/hiveptest/_tez_session_dir/d1aa15eb-d23c-4248-b509-0b29c456a1cd/.tez/application_1574237195383_0001_wd/localmode-log-dir’:
>  No such file or directory
> Stacktrace
> java.lang.RuntimeException: 
> java.io.IOException: ExitCodeException exitCode=1: chmod: cannot access 
> ‘/home/hiveptest/35.224.52.88-hiveptest-0/apache-github-source-source/target/tmp/junit9072291964634791171/scratchdir/hiveptest/_tez_session_dir/d1aa15eb-d23c-4248-b509-0b29c456a1cd/.tez/application_1574237195383_0001_wd/localmode-log-dir’:
>  No such file or directory
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:701)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:606)
>   at 
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:586)
>   at 
> org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.createDriver(TestScheduledQueryIntegration.java:164)
>   at 
> org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.runAsUser(TestScheduledQueryIntegration.java:132)
>   at 
> org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation(TestScheduledQueryIntegration.java:115)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at 
> 

[jira] [Commented] (HIVE-22516) TestScheduledQueryIntegration fails occasionally

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979780#comment-16979780
 ] 

Hive QA commented on HIVE-22516:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
12s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
44s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 1 new + 3 unchanged - 0 fixed 
= 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
20s{color} | {color:green} ql generated 0 new + 1538 unchanged - 1 fixed = 1538 
total (was 1539) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
50s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19537/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19537/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19537/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> TestScheduledQueryIntegration fails occasionally
> 
>
> Key: HIVE-22516
> URL: https://issues.apache.org/jira/browse/HIVE-22516
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22516.01.patch
>
>
> The failure seems to be caused by some filesystem-level operation:
> {code}
> Failed
> org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation
> Failing for the past 2 builds (Since Failed#19506 )
> Took 21 sec.
> Error Message
> java.io.IOException: ExitCodeException exitCode=1: chmod: cannot access 
> 

[jira] [Commented] (HIVE-22317) Beeline-site parser does not handle the variable substitution correctly

2019-11-21 Thread Rajkumar Singh (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979777#comment-16979777
 ] 

Rajkumar Singh commented on HIVE-22317:
---

[~maheshk114] I'm not sure why the patch apply is failing, but I have created 
the pull request [https://github.com/apache/hive/pull/849], which looks clean. 
Can you please review it and help me merge it?

> Beeline-site parser does not handle the variable substitution correctly
> ---
>
> Key: HIVE-22317
> URL: https://issues.apache.org/jira/browse/HIVE-22317
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 4.0.0
> Environment: Hive-4.0.0
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22317.01.patch, HIVE-22317.patch
>
>
> beeline-site.xml
> {code:java}
> <configuration xmlns:xi="http://www.w3.org/2001/XInclude">
>  <property>
>   <name>beeline.hs2.jdbc.url.container</name>
>   <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2</value>
>  </property>
>  <property>
>   <name>beeline.hs2.jdbc.url.default</name>
>   <value>test</value>
>  </property>
>  <property>
>   <name>beeline.hs2.jdbc.url.test</name>
>   <value>${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue</value>
>  </property>
>  <property>
>   <name>beeline.hs2.jdbc.url.llap</name>
>   <value>jdbc:hive2://c3220-node2.host.com:2181,c3220-node3.host.com:2181,c3220-node4.host.com:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-interactive</value>
>  </property>
> </configuration>
>  {code}
> beeline fails to connect because it does not parse the substituted value 
> correctly
> {code:java}
> beeline
> Error in parsing jdbc url: 
> ${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue from beeline-site.xml
> beeline>  {code}
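For illustration only, here is a self-contained sketch of the ${...} 
substitution the beeline-site parser needs to perform before handing the URL to 
the driver. It is not Beeline's actual implementation; the class and method 
names are invented, and it resolves a single level of references against the 
properties from the same file:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class BeelineSiteSubstitutionSketch {

  private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

  /** Replaces ${other.property} references with values taken from the same property map. */
  static String resolve(String value, Map<String, String> props) {
    Matcher m = VAR.matcher(value);
    StringBuffer out = new StringBuffer();
    while (m.find()) {
      String referenced = props.get(m.group(1));
      // Leave unknown references untouched instead of failing outright.
      m.appendReplacement(out, Matcher.quoteReplacement(referenced != null ? referenced : m.group(0)));
    }
    m.appendTail(out);
    return out.toString();
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<>();
    props.put("beeline.hs2.jdbc.url.container",
        "jdbc:hive2://c3220-node2.host.com:2181/;serviceDiscoveryMode=zooKeeper");
    props.put("beeline.hs2.jdbc.url.test",
        "${beeline.hs2.jdbc.url.container}?tez.queue.name=myqueue");
    // Prints the expanded JDBC URL instead of the literal ${...} text.
    System.out.println(resolve(props.get("beeline.hs2.jdbc.url.test"), props));
  }
}
{code}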



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979763#comment-16979763
 ] 

Hive QA commented on HIVE-22483:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986426/HIVE-22483.05.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17717 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_avro]
 (batchId=300)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19536/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19536/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19536/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986426 - PreCommit-HIVE-Build

> Vectorize UDF datetime_legacy_hybrid_calendar
> -
>
> Key: HIVE-22483
> URL: https://issues.apache.org/jira/browse/HIVE-22483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, 
> HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, 
> HIVE-22483.04.patch, HIVE-22483.05.patch, HIVE-22483.05.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-15078) Flaky dummy

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-15078:

Attachment: some.file.patch

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-15078.01-branch-3.patch, HIVE-15078.1.patch, 
> HIVE-15078.1.patch, some.file.patch
>
>
> I think it would be interesting to see what would happen if all currently 
> known flaky tests were ignored...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22521) Both Driver and SessionState has a userName

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22521:

Attachment: HIVE-22521.01.patch

> Both Driver and SessionState has a userName
> ---
>
> Key: HIVE-22521
> URL: https://issues.apache.org/jira/browse/HIVE-22521
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22521.01.patch, HIVE-22521.01.patch
>
>
> This caused some confusing behaviour for me, especially when the two values 
> were different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-15078) Flaky dummy

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-15078:

Attachment: (was: some.file.patch)

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-15078.01-branch-3.patch, HIVE-15078.1.patch, 
> HIVE-15078.1.patch, some.file.patch
>
>
> I think it would be interesting to see what would happen if all currently 
> known flaky tests were ignored...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-15078) Flaky dummy

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-15078:

Attachment: (was: some.file.patch)

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-15078.01-branch-3.patch, HIVE-15078.1.patch, 
> HIVE-15078.1.patch, some.file.patch
>
>
> I think it would be interesting to see what would happen if all currently 
> known flaky tests were ignored...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-15078) Flaky dummy

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-15078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-15078:

Attachment: some.file.patch

> Flaky dummy
> ---
>
> Key: HIVE-15078
> URL: https://issues.apache.org/jira/browse/HIVE-15078
> Project: Hive
>  Issue Type: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-15078.01-branch-3.patch, HIVE-15078.1.patch, 
> HIVE-15078.1.patch, some.file.patch, some.file.patch
>
>
> I think it would be interesting to see what would happen if all currently 
> known flaky tests were ignored...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22463:

Attachment: HIVE-22463.9.patch
Status: Patch Available  (was: Open)

> Support Decimal64 column multiplication with decimal64 Column/Scalar
> 
>
> Key: HIVE-22463
> URL: https://issues.apache.org/jira/browse/HIVE-22463
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22463.1.patch, HIVE-22463.2.patch, 
> HIVE-22463.3.patch, HIVE-22463.5.patch, HIVE-22463.6.patch, 
> HIVE-22463.7.patch, HIVE-22463.8.patch, HIVE-22463.9.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Support Decimal64 column multiplication with decimal64 Column/Scalar



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22463:

Status: Open  (was: Patch Available)

> Support Decimal64 column multiplication with decimal64 Column/Scalar
> 
>
> Key: HIVE-22463
> URL: https://issues.apache.org/jira/browse/HIVE-22463
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22463.1.patch, HIVE-22463.2.patch, 
> HIVE-22463.3.patch, HIVE-22463.5.patch, HIVE-22463.6.patch, 
> HIVE-22463.7.patch, HIVE-22463.8.patch, HIVE-22463.9.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Support Decimal64 column multiplication with decimal64 Column/Scalar



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979734#comment-16979734
 ] 

Hive QA commented on HIVE-22483:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
24s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} ql: The patch generated 0 new + 33 unchanged - 4 
fixed = 33 total (was 37) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19536/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19536/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorize UDF datetime_legacy_hybrid_calendar
> -
>
> Key: HIVE-22483
> URL: https://issues.apache.org/jira/browse/HIVE-22483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, 
> HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, 
> HIVE-22483.04.patch, HIVE-22483.05.patch, HIVE-22483.05.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979720#comment-16979720
 ] 

Hive QA commented on HIVE-22514:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986459/HIVE-22514.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17715 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19535/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19535/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19535/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986459 - PreCommit-HIVE-Build

> HiveProtoLoggingHook might consume lots of memory
> -
>
> Key: HIVE-22514
> URL: https://issues.apache.org/jira/browse/HIVE-22514
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22514.1.patch, HIVE-22514.2.patch, Screen Shot 
> 2019-11-18 at 2.19.24 PM.png
>
>
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this 
> queue can grow large.
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101!
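One possible mitigation, sketched under the assumption that writer tasks can be 
routed through a separate executor (this is not the attached patch): keep the 
scheduled executor for periodic rollover, but give the writer path a bounded 
queue with an explicit rejection policy so the backlog cannot grow without 
limit.

{code:java}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class BoundedEventWriterSketch {

  // Illustrative capacity; a real implementation would make this configurable.
  private static final int MAX_PENDING_EVENTS = 10_000;

  // Single writer thread with a bounded work queue.
  private final ThreadPoolExecutor writerPool = new ThreadPoolExecutor(
      1, 1, 0L, TimeUnit.MILLISECONDS,
      new ArrayBlockingQueue<Runnable>(MAX_PENDING_EVENTS),
      // When the queue is full, drop the task instead of buffering it forever;
      // CallerRunsPolicy would be the alternative trade-off (backpressure).
      new ThreadPoolExecutor.DiscardPolicy());

  public void submitWriteTask(Runnable writeEvent) {
    writerPool.execute(writeEvent);
  }

  public void shutdown() {
    writerPool.shutdown();
  }
}
{code}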



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979694#comment-16979694
 ] 

Hive QA commented on HIVE-22514:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
2s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 1s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
39s{color} | {color:blue} common in master has 65 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
30s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
26s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
44s{color} | {color:red} ql: The patch generated 2 new + 9 unchanged - 0 fixed 
= 11 total (was 9) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19535/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19535/yetus/diff-checkstyle-ql.txt
 |
| modules | C: common ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19535/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HiveProtoLoggingHook might consume lots of memory
> -
>
> Key: HIVE-22514
> URL: https://issues.apache.org/jira/browse/HIVE-22514
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22514.1.patch, HIVE-22514.2.patch, Screen Shot 
> 2019-11-18 at 2.19.24 PM.png
>
>
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this 
> queue can grow large.
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22521) Both Driver and SessionState has a userName

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979674#comment-16979674
 ] 

Hive QA commented on HIVE-22521:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986425/HIVE-22521.01.patch

{color:green}SUCCESS:{color} +1 due to 9 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17715 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb_schq] 
(batchId=177)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19534/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19534/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19534/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986425 - PreCommit-HIVE-Build

> Both Driver and SessionState has a userName
> ---
>
> Key: HIVE-22521
> URL: https://issues.apache.org/jira/browse/HIVE-22521
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22521.01.patch
>
>
> This caused some confusing behaviour for me, especially when the two values 
> were different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22486) Send only accessed columns for masking policies request

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22486?focusedWorklogId=347759=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347759
 ]

ASF GitHub Bot logged work on HIVE-22486:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 22:41
Start Date: 21/Nov/19 22:41
Worklog Time Spent: 10m 
  Work Description: jcamachor commented on pull request #848: HIVE-22486
URL: https://github.com/apache/hive/pull/848#discussion_r349354851
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRelFieldTrimmer.java
 ##
 @@ -674,10 +677,14 @@ public TrimResult trimFields(Project project, 
ImmutableBitSet fieldsUsed,
 // set columnAccessInfo for ViewColumnAuthorization
 for (Ord ord : Ord.zip(project.getProjects())) {
   if (fieldsUsed.get(ord.i)) {
-if (this.columnAccessInfo != null && this.viewProjectToTableSchema != 
null
-&& this.viewProjectToTableSchema.containsKey(project)) {
+if (this.viewProjectToTableSchema != null && 
this.viewProjectToTableSchema.containsKey(project)) {
   Table tab = this.viewProjectToTableSchema.get(project);
-  this.columnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  if (this.directColumnAccessInfo != null) {
+this.directColumnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  }
+  if (this.allColumnAccessInfo != null) {
+this.allColumnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  }
 
 Review comment:
   I have used a ColumnAccess internal object to represent this. Now it is 
cleaner because it uses a single data structure. However, I kept the current 
APIs in ColumnAccessInfo so I do not have to make changes all over the place 
for the time being. Can you take another look? Thanks
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347759)
Time Spent: 40m  (was: 0.5h)

> Send only accessed columns for masking policies request
> ---
>
> Key: HIVE-22486
> URL: https://issues.apache.org/jira/browse/HIVE-22486
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, 
> HIVE-22486.03.patch, HIVE-22486.05.patch, HIVE-22486.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, we send all columns in the masking request, even if they are not 
> accessed by the given query. We could send only those columns for which the 
> masking policy will be necessary.
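A small, hypothetical sketch of the idea -- the class, method, and parameter 
names are invented for the example and are not Hive's ColumnAccessInfo API: 
intersect the table's column list with the set of columns the query actually 
accesses, and send only that subset in the masking request.

{code:java}
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public final class MaskingRequestColumnsSketch {

  /**
   * Keeps only the columns the query actually accesses, preserving the table's
   * column order, so the masking-policy request never mentions unused columns.
   */
  static List<String> columnsForMaskingRequest(List<String> allTableColumns,
                                               Set<String> accessedColumns) {
    return allTableColumns.stream()
        .filter(accessedColumns::contains)
        .collect(Collectors.toList());
  }
}
{code}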



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22486) Send only accessed columns for masking policies request

2019-11-21 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22486:
---
Attachment: (was: HIVE-22486.04.patch)

> Send only accessed columns for masking policies request
> ---
>
> Key: HIVE-22486
> URL: https://issues.apache.org/jira/browse/HIVE-22486
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, 
> HIVE-22486.03.patch, HIVE-22486.05.patch, HIVE-22486.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, we send all columns in the masking request, even if they are not 
> accessed by the given query. We could send only those columns for which the 
> masking policy will be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22486) Send only accessed columns for masking policies request

2019-11-21 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22486:
---
Attachment: HIVE-22486.05.patch

> Send only accessed columns for masking policies request
> ---
>
> Key: HIVE-22486
> URL: https://issues.apache.org/jira/browse/HIVE-22486
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, 
> HIVE-22486.03.patch, HIVE-22486.04.patch, HIVE-22486.05.patch, 
> HIVE-22486.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, we send all columns in the masking request, even if they are not 
> accessed by the given query. We could send only those columns for which the 
> masking policy will be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22486) Send only accessed columns for masking policies request

2019-11-21 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22486:
---
Attachment: HIVE-22486.04.patch

> Send only accessed columns for masking policies request
> ---
>
> Key: HIVE-22486
> URL: https://issues.apache.org/jira/browse/HIVE-22486
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, 
> HIVE-22486.03.patch, HIVE-22486.04.patch, HIVE-22486.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, we send all columns in the masking request, even if they are not 
> accessed by the given query. We could send only those columns for which the 
> masking policy will be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22521) Both Driver and SessionState has a userName

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979649#comment-16979649
 ] 

Hive QA commented on HIVE-22521:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
56s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
31s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} service in master has 49 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
48s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
56s{color} | {color:green} ql: The patch generated 0 new + 1132 unchanged - 2 
fixed = 1132 total (was 1134) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch service passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19534/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql service itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19534/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Both Driver and SessionState has a userName
> ---
>
> Key: HIVE-22521
> URL: https://issues.apache.org/jira/browse/HIVE-22521
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22521.01.patch
>
>
> This caused some confusing behaviour for me, especially when the two values 
> were different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22517:

Attachment: HIVE-22517.02.patch

> Sysdb related qtests also output the sysdb sql commands to q.out
> 
>
> Key: HIVE-22517
> URL: https://issues.apache.org/jira/browse/HIVE-22517
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22517.01.patch, HIVE-22517.02.patch, 
> HIVE-22517.02.patch
>
>
> it would be better not to have them in the outputs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21737) Upgrade Avro to version 1.9.1

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979621#comment-16979621
 ] 

Hive QA commented on HIVE-21737:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986422/0001-HIVE-21737-Bump-Apache-Avro-to-1.9.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 17715 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_decimal_old] 
(batchId=52)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_deserialize_map_null]
 (batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_map_null] 
(batchId=94)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb_schq] 
(batchId=177)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_decimal] 
(batchId=103)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[avro_decimal_native]
 (batchId=127)
org.apache.hadoop.hive.schq.TestScheduledQueryIntegration.testScheduledQueryExecutionImpersonation
 (batchId=279)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=284)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19533/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19533/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19533/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986422 - PreCommit-HIVE-Build

> Upgrade Avro to version 1.9.1
> -
>
> Key: HIVE-21737
> URL: https://issues.apache.org/jira/browse/HIVE-21737
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ismaël Mejía
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
> Attachments: 0001-HIVE-21737-Bump-Apache-Avro-to-1.9.1.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner 
> version of Avro without Jackson in the public API. Worth the update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21737) Upgrade Avro to version 1.9.1

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979620#comment-16979620
 ] 

Hive QA commented on HIVE-21737:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
4s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
21s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} serde in master has 198 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
22s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19533/dev-support/hive-personality.sh
 |
| git revision | master / 13fc651 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: serde ql . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19533/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade Avro to version 1.9.1
> -
>
> Key: HIVE-21737
> URL: https://issues.apache.org/jira/browse/HIVE-21737
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Ismaël Mejía
>Assignee: Fokko Driesprong
>Priority: Major
>  Labels: pull-request-available
> Attachments: 0001-HIVE-21737-Bump-Apache-Avro-to-1.9.1.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner 
> version of Avro without Jackson in the public API. Worth the update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22506) Read-only transactions feature flag

2019-11-21 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-22506:
--
Attachment: HIVE-22506.3.patch

> Read-only transactions feature flag
> ---
>
> Key: HIVE-22506
> URL: https://issues.apache.org/jira/browse/HIVE-22506
> Project: Hive
>  Issue Type: Improvement
>  Components: Transactions
>Reporter: Denys Kuzmenko
>Assignee: Denys Kuzmenko
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22506.1.patch, HIVE-22506.2.patch, 
> HIVE-22506.3.patch
>
>
> Introduce a feature flag, so that read-only transaction functionality could 
> be conditionally turned on/off. 
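
A minimal sketch of the flag check, using a hypothetical property name and helper class rather than the actual HiveConf wiring:

{code}
import java.util.Properties;

public class ReadOnlyTxnGuard {
  // Hypothetical flag name, for illustration only.
  public static final String READ_ONLY_TXN_ENABLED = "hive.txn.readonly.enabled";

  // Open a read-only transaction only when the feature flag is on and the
  // statement itself does not write any data; otherwise use the existing
  // read-write transaction path.
  public static boolean openAsReadOnly(Properties conf, boolean statementIsReadOnly) {
    boolean featureEnabled =
        Boolean.parseBoolean(conf.getProperty(READ_ONLY_TXN_ENABLED, "false"));
    return featureEnabled && statementIsReadOnly;
  }
}
{code}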



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory

2019-11-21 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22514:
-
Status: Patch Available  (was: Open)

> HiveProtoLoggingHook might consume lots of memory
> -
>
> Key: HIVE-22514
> URL: https://issues.apache.org/jira/browse/HIVE-22514
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22514.1.patch, HIVE-22514.2.patch, Screen Shot 
> 2019-11-18 at 2.19.24 PM.png
>
>
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101!
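
A hedged sketch of one way to bound that memory, assuming the writer tasks and the rollover can live on separate executors; the class name, queue size and rejection policy below are illustrative, not the actual hook code:

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedEventLoggerSketch {
  // Writer tasks go through a bounded queue so a burst of query events cannot
  // grow heap usage without limit; the oldest pending event is dropped instead.
  private final ThreadPoolExecutor writerPool = new ThreadPoolExecutor(
      1, 1, 0L, TimeUnit.MILLISECONDS,
      new ArrayBlockingQueue<>(10_000),
      new ThreadPoolExecutor.DiscardOldestPolicy());

  // Rollover keeps its own tiny scheduled executor.
  private final ScheduledExecutorService rolloverScheduler =
      Executors.newSingleThreadScheduledExecutor();

  public void submitEvent(Runnable writeTask) {
    writerPool.execute(writeTask);
  }

  public void scheduleRollover(Runnable rolloverTask, long periodSeconds) {
    rolloverScheduler.scheduleWithFixedDelay(
        rolloverTask, periodSeconds, periodSeconds, TimeUnit.SECONDS);
  }
}
{code}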



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory

2019-11-21 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22514:
-
Status: Open  (was: Patch Available)

> HiveProtoLoggingHook might consume lots of memory
> -
>
> Key: HIVE-22514
> URL: https://issues.apache.org/jira/browse/HIVE-22514
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22514.1.patch, HIVE-22514.2.patch, Screen Shot 
> 2019-11-18 at 2.19.24 PM.png
>
>
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory

2019-11-21 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22514:
-
Attachment: (was: HIVE-22514.2.patch)

> HiveProtoLoggingHook might consume lots of memory
> -
>
> Key: HIVE-22514
> URL: https://issues.apache.org/jira/browse/HIVE-22514
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22514.1.patch, HIVE-22514.2.patch, Screen Shot 
> 2019-11-18 at 2.19.24 PM.png
>
>
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22514) HiveProtoLoggingHook might consume lots of memory

2019-11-21 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HIVE-22514:
-
Attachment: HIVE-22514.2.patch

> HiveProtoLoggingHook might consume lots of memory
> -
>
> Key: HIVE-22514
> URL: https://issues.apache.org/jira/browse/HIVE-22514
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 4.0.0
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22514.1.patch, HIVE-22514.2.patch, Screen Shot 
> 2019-11-18 at 2.19.24 PM.png
>
>
> HiveProtoLoggingHook uses a ScheduledThreadPoolExecutor to submit writer 
> tasks and to periodically handle rollover. The built-in 
> ScheduledThreadPoolExecutor uses an unbounded queue which cannot be replaced 
> from the outside. If log events are generated at a very fast rate, this queue 
> can grow large.
> !Screen Shot 2019-11-18 at 2.19.24 PM.png|width=650,height=101!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22523) The error handler in LlapRecordReader might block if its queue is full

2019-11-21 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979580#comment-16979580
 ] 

Attila Magyar commented on HIVE-22523:
--

[~bslim] I think they're probably set but not visible. Based on a heap dump it 
looked like the error handling was only partially executed, as if it was stuck 
at some point. This is a point where it looks like it is possible to get stuck. 
Even if it doesn't solve the original problem, it still looks like a potential 
bug.

> The error handler in LlapRecordReader might block if its queue is full
> --
>
> Key: HIVE-22523
> URL: https://issues.apache.org/jira/browse/HIVE-22523
> Project: Hive
>  Issue Type: Bug
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22523.1.patch
>
>
> In setError() we set the value of an atomic reference (pendingError) and we 
> also put the error in a queue. The latter is not only unnecessary, it might 
> also block the caller of the handler if the queue is full. In addition, closing 
> of the reader might not be handled properly, as some of the flags are not 
> volatile.
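
A rough sketch of the direction the description suggests, using illustrative names rather than the real LlapRecordReader fields: record only the first error in the atomic reference, skip the queue entirely, and make the close flag volatile so the producer loop sees it.

{code}
import java.util.concurrent.atomic.AtomicReference;

public class ErrorSignalSketch {
  // volatile makes the close signal visible to the producer thread that loops
  // in enqueueInternal().
  private volatile boolean isClosed;
  private final AtomicReference<Throwable> pendingError = new AtomicReference<>();

  // Record only the first error and never touch the bounded queue, so the
  // error handler can never block on a full queue.
  public void setError(Throwable t) {
    pendingError.compareAndSet(null, t);
    isClosed = true;
  }

  public boolean isClosed() {
    return isClosed;
  }

  public void rethrowIfFailed() throws Exception {
    Throwable t = pendingError.get();
    if (t != null) {
      throw new Exception("LLAP reader failed", t);
    }
  }
}
{code}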



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979567#comment-16979567
 ] 

Hive QA commented on HIVE-22517:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986419/HIVE-22517.02.patch

{color:green}SUCCESS:{color} +1 due to 6 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17709 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=112)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19532/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19532/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19532/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986419 - PreCommit-HIVE-Build

> Sysdb related qtests also output the sysdb sql commands to q.out
> 
>
> Key: HIVE-22517
> URL: https://issues.apache.org/jira/browse/HIVE-22517
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22517.01.patch, HIVE-22517.02.patch
>
>
> it would be better not to have them in the outputs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22517) Sysdb related qtests also output the sysdb sql commands to q.out

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979562#comment-16979562
 ] 

Hive QA commented on HIVE-22517:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
50s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
 8s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
18s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
54s{color} | {color:blue} itests/util in master has 53 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
54s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19532/dev-support/hive-personality.sh
 |
| git revision | master / df8e185 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19532/yetus/whitespace-eol.txt
 |
| modules | C: ql . itests/util U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19532/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Sysdb related qtests also output the sysdb sql commands to q.out
> 
>
> Key: HIVE-22517
> URL: https://issues.apache.org/jira/browse/HIVE-22517
> Project: Hive
>  Issue Type: Improvement
>  Components: Test
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22517.01.patch, HIVE-22517.02.patch
>
>
> it would be better not to have them in the outputs



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22523) The error handler in LlapRecordReader might block if its queue is full

2019-11-21 Thread Slim Bouguerra (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979550#comment-16979550
 ] 

Slim Bouguerra commented on HIVE-22523:
---

As per the code, it will wait for 100ms, then the next round should exit if one 
of the flags is set.
{code}
private void enqueueInternal(Object o) throws InterruptedException {
  // We need to loop here to handle the case where consumer goes away.
  do {} while (!isClosed && !isInterrupted && !queue.offer(o, 100, TimeUnit.MILLISECONDS));
}
{code}

Are you saying that in some cases the flags are not set, or that the change is 
not visible to the thread?

> The error handler in LlapRecordReader might block if its queue is full
> --
>
> Key: HIVE-22523
> URL: https://issues.apache.org/jira/browse/HIVE-22523
> Project: Hive
>  Issue Type: Bug
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22523.1.patch
>
>
> In setError() we set the value of an atomic reference (pendingError) and we 
> also put the error in a queue. The latter is not only unnecessary, it might 
> also block the caller of the handler if the queue is full. In addition, closing 
> of the reader might not be handled properly, as some of the flags are not 
> volatile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22523) The error handler in LlapRecordReader might block if its queue is full

2019-11-21 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979535#comment-16979535
 ] 

Attila Magyar commented on HIVE-22523:
--

[~bslim], it tries to put the object into a queue which has a capacity limit. 
If the queue is full, it will wait. It can happen that the consumer quits while 
the queue is already full. See nextCvb and the implementation of 
enqueueInternal.

> The error handler in LlapRecordReader might block if its queue is full
> --
>
> Key: HIVE-22523
> URL: https://issues.apache.org/jira/browse/HIVE-22523
> Project: Hive
>  Issue Type: Bug
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22523.1.patch
>
>
> In setError() we set the value of an atomic reference (pendingError) and we 
> also put the error in a queue. The latter is not only unnecessary, it might 
> also block the caller of the handler if the queue is full. In addition, closing 
> of the reader might not be handled properly, as some of the flags are not 
> volatile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22526) Extract Compiler from Driver

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22526:
--
Attachment: HIVE-22526.01.patch

> Extract Compiler from Driver
> 
>
> Key: HIVE-22526
> URL: https://issues.apache.org/jira/browse/HIVE-22526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22526.01.patch
>
>
> The Driver class contains ~600 lines of code responsible for compiling the 
> command. That means that from the command String a Plan needs to be created, 
> and also a transaction needs to be started (in most cases). This is done by 
> the compile function, which has a lot of sub-functions to help with this task, 
> and is itself also really big. All this code should be put into a separate 
> class, where it can do its job without getting mixed with the rest of the code 
> in the Driver.
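
A very rough sketch of the extraction direction; the class and method shapes below are placeholders, not the final Hive API:

{code}
public class CompilerSketch {
  // Placeholder result type standing in for the query Plan.
  static class Plan {
    final String explainText;
    Plan(String explainText) { this.explainText = explainText; }
  }

  // The ~600 compile-related lines would move out of Driver into a method like
  // this one: parse the command, open a transaction when required, and return
  // the Plan for the Driver to execute.
  public Plan compile(String command, boolean startTransaction) {
    if (startTransaction) {
      // transaction handling elided in this sketch
    }
    return new Plan("plan for: " + command);
  }
}
{code}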



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22526) Extract Compiler from Driver

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22526:
--
Status: Patch Available  (was: Open)

> Extract Compiler from Driver
> 
>
> Key: HIVE-22526
> URL: https://issues.apache.org/jira/browse/HIVE-22526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22526.01.patch
>
>
> The Driver class contains ~600 lines of code responsible for compiling the 
> command. That means that from the command String a Plan needs to be created, 
> and also a transaction needs to be started (in most cases). This is done by 
> the compile function, which has a lot of sub-functions to help with this task, 
> and is itself also really big. All this code should be put into a separate 
> class, where it can do its job without getting mixed with the rest of the code 
> in the Driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22526) Extract Compiler from Driver

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-22526:
-


> Extract Compiler from Driver
> 
>
> Key: HIVE-22526
> URL: https://issues.apache.org/jira/browse/HIVE-22526
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
>
> The Driver class contains ~600 lines of code responsible for compiling the 
> command. That means that from the command String a Plan needs to be created, 
> and also a transaction needs to be started (in most cases). This is done by 
> the compile function, which has a lot of sub-functions to help with this task, 
> and is itself also really big. All this code should be put into a separate 
> class, where it can do its job without getting mixed with the rest of the code 
> in the Driver.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22525) Refactor HiveOpConverter

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22525:
--
Status: Patch Available  (was: Open)

> Refactor HiveOpConverter
> 
>
> Key: HIVE-22525
> URL: https://issues.apache.org/jira/browse/HIVE-22525
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22525.01.patch
>
>
> HiveOpConverter is on its way to becoming a monster class. It is already ~1300 
> lines long, and expected to grow. It should be refactored, cut into multiple 
> classes in a reasonable way. A natural way to do this is to create separate 
> visitor classes for the different RelNodes, which are already handled in 
> different functions within HiveOpConverter. That way HiveOpConverter can be 
> the dispatcher among those visitor classes, while each of them handles some 
> specific work, potentially requesting sub-nodes to be dispatched by 
> HiveOpConverter. The functions used by multiple visitors should be put into 
> some utility class.
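
A small sketch of the dispatcher-plus-visitors shape, using placeholder node types instead of the real Calcite RelNodes:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Placeholder node types standing in for Calcite RelNodes.
interface RelNodeLike {}
class ProjectNode implements RelNodeLike {}
class JoinNode implements RelNodeLike {}

// The converter keeps only the dispatch table; each node kind gets its own
// small visitor, mirroring the split suggested above.
public class OpConverterDispatcherSketch {
  private final Map<Class<?>, Function<RelNodeLike, String>> visitors = new HashMap<>();

  public OpConverterDispatcherSketch() {
    visitors.put(ProjectNode.class, n -> "SelectOperator");
    visitors.put(JoinNode.class, n -> "JoinOperator");
  }

  public String dispatch(RelNodeLike node) {
    Function<RelNodeLike, String> visitor = visitors.get(node.getClass());
    if (visitor == null) {
      throw new IllegalArgumentException("No visitor for " + node.getClass());
    }
    return visitor.apply(node);
  }
}
{code}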



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22525) Refactor HiveOpConverter

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22525:
--
Attachment: HIVE-22525.01.patch

> Refactor HiveOpConverter
> 
>
> Key: HIVE-22525
> URL: https://issues.apache.org/jira/browse/HIVE-22525
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22525.01.patch
>
>
> HiveOpConverter is on its way to becoming a monster class. It is already ~1300 
> lines long, and expected to grow. It should be refactored, cut into multiple 
> classes in a reasonable way. A natural way to do this is to create separate 
> visitor classes for the different RelNodes, which are already handled in 
> different functions within HiveOpConverter. That way HiveOpConverter can be 
> the dispatcher among those visitor classes, while each of them handles some 
> specific work, potentially requesting sub-nodes to be dispatched by 
> HiveOpConverter. The functions used by multiple visitors should be put into 
> some utility class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22525) Refactor HiveOpConverter

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely reassigned HIVE-22525:
-


> Refactor HiveOpConverter
> 
>
> Key: HIVE-22525
> URL: https://issues.apache.org/jira/browse/HIVE-22525
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
> Fix For: 4.0.0
>
>
> HiveOpConverter is on its way to becoming a monster class. It is already ~1300 
> lines long, and expected to grow. It should be refactored, cut into multiple 
> classes in a reasonable way. A natural way to do this is to create separate 
> visitor classes for the different RelNodes, which are already handled in 
> different functions within HiveOpConverter. That way HiveOpConverter can be 
> the dispatcher among those visitor classes, while each of them handles some 
> specific work, potentially requesting sub-nodes to be dispatched by 
> HiveOpConverter. The functions used by multiple visitors should be put into 
> some utility class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22369) Handle HiveTableFunctionScan at return path

2019-11-21 Thread Miklos Gergely (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miklos Gergely updated HIVE-22369:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Handle HiveTableFunctionScan at return path
> ---
>
> Key: HIVE-22369
> URL: https://issues.apache.org/jira/browse/HIVE-22369
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The 
> [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573]
>  at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by 
> CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a 
> [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831]
>  or a 
> [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776].
>  When HiveCalciteUtil.getTopLevelSelect is invoked on this it is looking for 
> a 
> [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633]
>  node in the tree, which it won't find if a HiveTableFunctionScan was 
> returned. This is why TestNewGetSplitsFormat is failing with return path.
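
A minimal sketch of the idea, with placeholder node types standing in for the Calcite/Hive classes: the top-level lookup accepts either a project-like or a table-function-scan-like node instead of insisting on a project only.

{code}
interface PlanNode {
  PlanNode input();
}

public class TopLevelSelectSketch {
  static class HiveProjectLike implements PlanNode {
    public PlanNode input() { return null; }
  }

  static class HiveTableFunctionScanLike implements PlanNode {
    public PlanNode input() { return null; }
  }

  // Walk down the single-input chain until a node that can act as the
  // top-level "select" is found.
  public static PlanNode getTopLevelSelect(PlanNode root) {
    PlanNode current = root;
    while (current != null) {
      if (current instanceof HiveProjectLike || current instanceof HiveTableFunctionScanLike) {
        return current;
      }
      current = current.input();
    }
    throw new IllegalStateException("No top-level select-like node found");
  }
}
{code}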



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22523) The error handler in LlapRecordReader might block if its queue is full

2019-11-21 Thread Slim Bouguerra (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979516#comment-16979516
 ] 

Slim Bouguerra commented on HIVE-22523:
---

[~amagyar] {code}org.apache.hadoop.hive.llap.io.api.impl.LlapRecordReader#enqueueInternal{code} 
is not blocking. Can you please explain more what the issue is? Is it a 
variable read visibility issue?

> The error handler in LlapRecordReader might block if its queue is full
> --
>
> Key: HIVE-22523
> URL: https://issues.apache.org/jira/browse/HIVE-22523
> Project: Hive
>  Issue Type: Bug
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22523.1.patch
>
>
> In setError() we set the value of an atomic reference (pendingError) and we 
> also put the error in a queue. The latter is not only unnecessary, it might 
> also block the caller of the handler if the queue is full. In addition, closing 
> of the reader might not be handled properly, as some of the flags are not 
> volatile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22369) Handle HiveTableFunctionScan at return path

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979475#comment-16979475
 ] 

Hive QA commented on HIVE-22369:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986418/HIVE-22369.02.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17715 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19531/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19531/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19531/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986418 - PreCommit-HIVE-Build

> Handle HiveTableFunctionScan at return path
> ---
>
> Key: HIVE-22369
> URL: https://issues.apache.org/jira/browse/HIVE-22369
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The 
> [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573]
>  at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by 
> CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a 
> [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4831]
>  or a 
> [HiveTableFunctionScan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L4776].
>  When HiveCalciteUtil.getTopLevelSelect is invoked on this it is looking for 
> a 
> [HiveProject|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/HiveCalciteUtil.java#L633]
>  node in the tree, which it won't find if a HiveTableFunctionScan was 
> returned. This is why TestNewGetSplitsFormat is failing with return path.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-21 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22499:

Status: In Progress  (was: Patch Available)

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-22499.WIP.patch, HIVE-22499.patch
>
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while 
> this patch serves the same purpose but falls back to a retry of 
> FileSystem.get() in case the supplier fails at runtime.
> Also, as a side note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).
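
A generic sketch of the supplier-with-fallback pattern described above; the FileSystem and Configuration types are elided and the class name is illustrative:

{code}
import java.util.function.Supplier;

public class SupplierWithFallback<T> {
  private final Supplier<T> primary;   // e.g. a cached/lazy filesystem supplier
  private final Supplier<T> fallback;  // e.g. a plain factory call

  public SupplierWithFallback(Supplier<T> primary, Supplier<T> fallback) {
    this.primary = primary;
    this.fallback = fallback;
  }

  public T get() {
    try {
      return primary.get();
    } catch (RuntimeException e) {
      // Retry through the plain factory call when the supplier fails at runtime.
      return fallback.get();
    }
  }
}
{code}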



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-21 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22499:

Status: Patch Available  (was: In Progress)

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-22499.WIP.patch, HIVE-22499.patch
>
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while 
> this patch serves the same purpose but falls back to a retry of 
> FileSystem.get() in case the supplier fails at runtime.
> Also, as a side note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22476) Hive datediff function provided inconsistent results when hive.fetch.task.conversion is set to none

2019-11-21 Thread Slim Bouguerra (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slim Bouguerra updated HIVE-22476:
--
Attachment: HIVE-22476.8.patch

> Hive datediff function provided inconsistent results when 
> hive.fetch.task.conversion is set to none
> ---
>
> Key: HIVE-22476
> URL: https://issues.apache.org/jira/browse/HIVE-22476
> Project: Hive
>  Issue Type: Bug
>Reporter: Slim Bouguerra
>Assignee: Slim Bouguerra
>Priority: Major
> Attachments: HIVE-22476.2.patch, HIVE-22476.3.patch, 
> HIVE-22476.5.patch, HIVE-22476.6.patch, HIVE-22476.7.patch, 
> HIVE-22476.7.patch, HIVE-22476.8.patch
>
>
> The actual issue stems from the different date parsers used by various parts 
> of the engine.
> Fetch task uses udfdatediff via {code} 
> org.apache.hadoop.hive.ql.udf.generic.GenericUDFToDate{code} while the 
> vectorized llap execution uses {code}VectorUDFDateDiffScalarCol{code}.
> This fix is meant to be not very intrusive and will add more support to the 
> GenericUDFToDate by enhancing the parser.
> For the longer term, it will be better to use one parser for all the operators.
> Thanks [~Rajkumar Singh] for the repro example
> {code} 
> create external table testdatediff(datetimecol string) stored as orc;
> insert into testdatediff values ('2019-09-09T10:45:49+02:00'),('2019-07-24');
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> set hive.fetch.task.conversion=none;
> select datetimecol from testdatediff where datediff(cast(current_timestamp as 
> string), datetimecol)<183;
> {code}
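
For illustration, a small standalone parser that accepts both inputs from the repro ('2019-09-09T10:45:49+02:00' and '2019-07-24'); this is a sketch of the lenient-parsing idea, not the Hive UDF code:

{code}
import java.time.LocalDate;
import java.time.OffsetDateTime;
import java.time.format.DateTimeParseException;

public class LenientDateParseSketch {
  // Try the ISO offset date-time form first, then fall back to a plain date.
  static LocalDate parseToDate(String s) {
    try {
      return OffsetDateTime.parse(s).toLocalDate();
    } catch (DateTimeParseException e) {
      return LocalDate.parse(s); // plain yyyy-MM-dd
    }
  }

  public static void main(String[] args) {
    System.out.println(parseToDate("2019-09-09T10:45:49+02:00")); // 2019-09-09
    System.out.println(parseToDate("2019-07-24"));                // 2019-07-24
  }
}
{code}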



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-21 Thread Karen Coppage (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979440#comment-16979440
 ] 

Karen Coppage commented on HIVE-21266:
--

It's still a waste of resources to clean compaction transactions that have not 
been compacted.

Changing issue name to: Don't run cleaner if compaction is skipped (issue with 
single delta file)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This is problematic.
> Suppose you have 1 delta file from streaming ingest: {{delta_11_20}} where 
> {{txnid:13}} was aborted. The code above will not rewrite the delta (which 
> would drop anything that belongs to the aborted txn), yet the compaction will 
> still transition to the "ready_for_cleaning" state, which drops the metadata 
> about the aborted txn in {{markCleaned()}}. Now aborted data will come back as 
> committed.
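
A hedged sketch of the intended control flow; the state names and helper below are illustrative, not the actual CompactorMR/Worker API:

{code}
public class SkipCompactionSketch {
  enum CompactionState { INITIATED, READY_FOR_CLEANING, SKIPPED }

  static CompactionState decide(int deltaCount, boolean hasBase, int origCount) {
    int total = deltaCount + (hasBase ? 1 : 0) + origCount;
    if (total <= 1) {
      // Nothing is rewritten, so the cleaner must not run: transitioning to
      // READY_FOR_CLEANING here would drop the aborted-txn metadata even
      // though the aborted data is still on disk.
      return CompactionState.SKIPPED;
    }
    return CompactionState.READY_FOR_CLEANING;
  }
}
{code}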



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21266) Don't run cleaner if compaction is skipped (issue with single delta file)

2019-11-21 Thread Karen Coppage (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karen Coppage updated HIVE-21266:
-
Summary: Don't run cleaner if compaction is skipped (issue with single 
delta file)  (was: Unit test for potential issue with single delta file)

> Don't run cleaner if compaction is skipped (issue with single delta file)
> -
>
> Key: HIVE-21266
> URL: https://issues.apache.org/jira/browse/HIVE-21266
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 4.0.0
>Reporter: Eugene Koifman
>Assignee: Karen Coppage
>Priority: Major
>
> [https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/txn/compactor/CompactorMR.java#L353-L357]
>  
> {noformat}
> if ((deltaCount + (dir.getBaseDirectory() == null ? 0 : 1)) + origCount <= 1) 
> {
>   LOG.debug("Not compacting {}; current base is {} and there are {} 
> deltas and {} originals", sd.getLocation(), dir
>   .getBaseDirectory(), deltaCount, origCount);
>   return;
> }
>  {noformat}
> This is problematic.
> Suppose you have 1 delta file from streaming ingest: {{delta_11_20}} where 
> {{txnid:13}} was aborted. The code above will not rewrite the delta (which 
> would drop anything that belongs to the aborted txn), yet the compaction will 
> still transition to the "ready_for_cleaning" state, which drops the metadata 
> about the aborted txn in {{markCleaned()}}. Now aborted data will come back as 
> committed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-21917) COMPLETED_TXN_COMPONENTS table is never cleaned up unless Compactor runs

2019-11-21 Thread Denys Kuzmenko (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-21917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Denys Kuzmenko updated HIVE-21917:
--
Attachment: HIVE-21917.5.patch

> COMPLETED_TXN_COMPONENTS table is never cleaned up unless Compactor runs
> 
>
> Key: HIVE-21917
> URL: https://issues.apache.org/jira/browse/HIVE-21917
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.1.0, 3.1.1
>Reporter: Craig Condit
>Assignee: Denys Kuzmenko
>Priority: Major
> Attachments: HIVE-21917.1.patch, HIVE-21917.2.patch, 
> HIVE-21917.3.patch, HIVE-21917.4.patch, HIVE-21917.5.patch
>
>
> The Initiator thread in the metastore repeatedly loops over entries in the 
> COMPLETED_TXN_COMPONENTS table to determine which partitions / tables might 
> need to be compacted. However, entries are never removed from this table 
> except by a completed Compactor run.
> In a cluster where most tables / partitions are write-once read-many, this 
> results in stale entries in this table never being cleaned up. In a small 
> test cluster, we have observed approximately 45k entries in this table 
> (virtually equal to the number of partitions in the cluster) while < 100 of 
> these tables have delta files at all. Since most of the tables will never get 
> enough writes to trigger a compaction (and in fact have only ever been 
> written to once), the initiator thread keeps trying to evaluate them on every 
> loop.
> On this test cluster, it takes approximately 10 minutes to loop through all 
> the entries and results in severe performance degradation on metastore 
> operations. With the default run timing of 5 minutes, the initiator basically 
> never stops running.
> On a production cluster with 2M partitions, this would be a non-starter.
> The initiator thread should proactively remove entries from 
> COMPLETED_TXN_COMPONENTS when it determines that a compaction is not needed, 
> so that they are not evaluated again on the next loop.
>  
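
An illustrative sketch of such a sweep; CompactionCandidate, needsCompaction and purgeCompletedTxnComponents are hypothetical stand-ins for the metastore Initiator internals, not real Hive APIs:

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class InitiatorSweepSketch {
  interface CompactionCandidate { String tableName(); }
  interface TxnStore { void purgeCompletedTxnComponents(CompactionCandidate c); }

  static List<CompactionCandidate> sweep(List<CompactionCandidate> candidates,
      Predicate<CompactionCandidate> needsCompaction, TxnStore store) {
    List<CompactionCandidate> toCompact = new ArrayList<>();
    for (CompactionCandidate c : candidates) {
      if (needsCompaction.test(c)) {
        toCompact.add(c);
      } else {
        // Proactively drop stale COMPLETED_TXN_COMPONENTS rows so the next
        // Initiator iteration does not re-evaluate this table/partition.
        store.purgeCompletedTxnComponents(c);
      }
    }
    return toCompact;
  }
}
{code}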



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22369) Handle HiveTableFunctionScan at return path

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16979437#comment-16979437
 ] 

Hive QA commented on HIVE-22369:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
51s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 0s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
26s{color} | {color:blue} ql in master has 1539 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
47s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 20 
unchanged - 33 fixed = 21 total (was 53) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19531/dev-support/hive-personality.sh
 |
| git revision | master / df8e185 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19531/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19531/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Handle HiveTableFunctionScan at return path
> ---
>
> Key: HIVE-22369
> URL: https://issues.apache.org/jira/browse/HIVE-22369
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Miklos Gergely
>Assignee: Miklos Gergely
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22369.01.patch, HIVE-22369.02.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> The 
> [optimizedOptiqPlan|https://github.com/apache/hive/blob/5c91d324f22c2ae47e234e76a9bc5ee1a71e6a70/ql/src/java/org/apache/hadoop/hive/ql/parse/CalcitePlanner.java#L1573]
>  at CalcitePlanner.getOptimizedHiveOPDag is ultimately generated by 
> CalcitePlanner.internalGenSelectLogicalPlan, which may either provide a 
> 

[jira] [Updated] (HIVE-22510) Support decimal64 operations for column operands with different scales

2019-11-21 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22510:

Attachment: HIVE-22510.5.patch
Status: Patch Available  (was: Open)

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22510.2.patch, HIVE-22510.3.patch, 
> HIVE-22510.4.patch, HIVE-22510.5.patch
>
>
> Right now, if the operands on the decimal64 operations are columns with 
> different scales, then we do not use the decimal64 vectorized version and 
> fall back to HiveDecimal vectorized version of the operator. In this Jira, we 
> will check if we can use decimal64 vectorized version, even if the scales are 
> different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22510) Support decimal64 operations for column operands with different scales

2019-11-21 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22510:

Status: Open  (was: Patch Available)

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22510.2.patch, HIVE-22510.3.patch, 
> HIVE-22510.4.patch, HIVE-22510.5.patch
>
>
> Right now, if the operands on the decimal64 operations are columns with 
> different scales, then we do not use the decimal64 vectorized version and 
> fall back to HiveDecimal vectorized version of the operator. In this Jira, we 
> will check if we can use decimal64 vectorized version, even if the scales are 
> different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?focusedWorklogId=347539&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347539
 ]

ASF GitHub Bot logged work on HIVE-22463:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 17:14
Start Date: 21/Nov/19 17:14
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #846: HIVE-22463 
decimal64 multiplication
URL: https://github.com/apache/hive/pull/846#discussion_r348781288
 
 

 ##
 File path: 
ql/src/gen/vectorization/ExpressionTemplates/Decimal64ScalarMultiplyDecimal64Column.txt
 ##
 @@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.exec.vector.expressions.gen;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hive.ql.exec.vector.Decimal64ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorExpressionDescriptor;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.Decimal64Util;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
+import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
+import org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Generated from template Decimal64ScalarArithmeticDecimal64Column.txt.
 
 Review comment:
   We don't need 2 classes for this, because multiply is commutative
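
A minimal standalone sketch of that point, using hypothetical class names rather than the actual generated templates: decimal64 values are plain scaled longs, and since long multiplication is commutative the Scalar*Column case can simply delegate to the Column*Scalar implementation instead of generating a second template.

{code:java}
// Hypothetical sketch, not Hive's generated classes: Scalar*Column reuses Column*Scalar.
final class ColumnMultiplyScalar {
  private final long scalar;

  ColumnMultiplyScalar(long scalar) {
    this.scalar = scalar;
  }

  long[] evaluate(long[] column) {
    long[] out = new long[column.length];
    for (int i = 0; i < column.length; i++) {
      out[i] = column[i] * scalar;   // decimal64 values are scaled longs
    }
    return out;
  }
}

final class ScalarMultiplyColumn {
  private final ColumnMultiplyScalar delegate;

  ScalarMultiplyColumn(long scalar) {
    this.delegate = new ColumnMultiplyScalar(scalar);   // same math, no second template needed
  }

  long[] evaluate(long[] column) {
    return delegate.evaluate(column);
  }
}
{code}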
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347539)
Time Spent: 50m  (was: 40m)

> Support Decimal64 column multiplication with decimal64 Column/Scalar
> 
>
> Key: HIVE-22463
> URL: https://issues.apache.org/jira/browse/HIVE-22463
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22463.1.patch, HIVE-22463.2.patch, 
> HIVE-22463.3.patch, HIVE-22463.5.patch, HIVE-22463.6.patch, 
> HIVE-22463.7.patch, HIVE-22463.8.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Support Decimal64 column multiplication with decimal64 Column/Scalar



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?focusedWorklogId=347538&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347538
 ]

ASF GitHub Bot logged work on HIVE-22463:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 17:14
Start Date: 21/Nov/19 17:14
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #846: HIVE-22463 
decimal64 multiplication
URL: https://github.com/apache/hive/pull/846#discussion_r348773487
 
 

 ##
 File path: 
ql/src/gen/vectorization/ExpressionTemplates/Decimal64ColumnMultiplyDecimal64Scalar.txt
 ##
 @@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.exec.vector.expressions.gen;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hive.ql.exec.vector.Decimal64ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorExpressionDescriptor;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.Decimal64Util;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
+import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
+import org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Generated from template ColumnArithmeticScalar.txt, which covers decimal64 
arithmetic
+ * expressions between a column and a scalar.
+ */
+public class  extends VectorExpression {
+
+  private static final long serialVersionUID = 1L;
+
+  private final int colNum;
+  private final long value;
+
+  public (int colNum, long value, int outputColumnNum) {
+super(outputColumnNum);
+this.colNum = colNum;
+this.value = value;
+  }
+
+  public () {
+super();
+
+// Dummy final assignments.
+colNum = -1;
+value = 0;
+  }
+
+  @Override
+  public void evaluate(VectorizedRowBatch batch) throws HiveException {
+
+// return immediately if batch is empty
+final int n = batch.size;
+if (n == 0) {
+  return;
+}
+
+if (childExpressions != null) {
+  super.evaluateChildren(batch);
+}
+
+Decimal64ColumnVector inputColVector = (Decimal64ColumnVector) 
batch.cols[colNum];
+Decimal64ColumnVector outputColVector = (Decimal64ColumnVector) 
batch.cols[outputColumnNum];
+int[] sel = batch.selected;
+boolean[] inputIsNull = inputColVector.isNull;
+boolean[] outputIsNull = outputColVector.isNull;
+
+// We do not need to do a column reset since we are carefully changing the 
output.
+outputColVector.isRepeating = false;
+
+long[] vector = inputColVector.vector;
+long[] outputVector = outputColVector.vector;
+
+final long outputDecimal64AbsMax =
+HiveDecimalWritable.getDecimal64AbsMax(outputColVector.precision);
+DecimalTypeInfo lDecimalTypeInfo = (DecimalTypeInfo) inputTypeInfos[0];
+DecimalTypeInfo rDecimalTypeInfo = (DecimalTypeInfo) inputTypeInfos[1];
+HiveDecimalWritable writable = new HiveDecimalWritable();
+writable.deserialize64(value, lDecimalTypeInfo.scale() - 
rDecimalTypeInfo.scale());
 
 Review comment:
   Does this operation belong within the evaluate() - looks like several of 
these parameters do not change across evaluate() calls 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347538)
Time Spent: 40m  (was: 0.5h)

> Support Decimal64 column multiplication with decimal64 Column/Scalar
> 
>
> Key: HIVE-22463
> URL: https://issues.apache.org/jira/browse/HIVE-22463
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
> 

[jira] [Work logged] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?focusedWorklogId=347537&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347537
 ]

ASF GitHub Bot logged work on HIVE-22463:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 17:14
Start Date: 21/Nov/19 17:14
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #846: HIVE-22463 
decimal64 multiplication
URL: https://github.com/apache/hive/pull/846#discussion_r348773917
 
 

 ##
 File path: 
ql/src/gen/vectorization/ExpressionTemplates/Decimal64ColumnMultiplyDecimal64Scalar.txt
 ##
 @@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.exec.vector.expressions.gen;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hive.ql.exec.vector.Decimal64ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorExpressionDescriptor;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.Decimal64Util;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
+import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
+import org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Generated from template ColumnArithmeticScalar.txt, which covers decimal64 
arithmetic
+ * expressions between a column and a scalar.
+ */
+public class  extends VectorExpression {
+
+  private static final long serialVersionUID = 1L;
+
+  private final int colNum;
+  private final long value;
+
+  public (int colNum, long value, int outputColumnNum) {
+super(outputColumnNum);
+this.colNum = colNum;
+this.value = value;
 
 Review comment:
   This is a better place to compute most of the constant checks within the 
evaluate()
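
A hedged sketch of the hoisting suggested here, with hypothetical names rather than the generated template: everything that depends only on the scalar operand and the declared scales is computed once in the constructor (assuming a non-negative scale difference), leaving evaluate() with just the per-batch loop and the overflow bound check.

{code:java}
// Hypothetical sketch, not the actual template: constant work moved out of evaluate().
final class ColumnMultiplyScalarExpr {
  private final long scaledScalar;   // scalar already adjusted for the scale difference
  private final long outputAbsMax;   // overflow bound for the declared output precision

  ColumnMultiplyScalarExpr(long scalar, int scaleDiff, long outputAbsMax) {
    long factor = 1L;
    for (int i = 0; i < scaleDiff; i++) {   // assumes scaleDiff >= 0
      factor *= 10L;
    }
    this.scaledScalar = scalar * factor;    // computed once, not on every evaluate() call
    this.outputAbsMax = outputAbsMax;
  }

  void evaluate(long[] column, long[] out, boolean[] outIsNull) {
    for (int i = 0; i < column.length; i++) {
      long result = column[i] * scaledScalar;
      if (Math.abs(result) > outputAbsMax) {
        outIsNull[i] = true;                // overflow maps to null, mirroring the bound check
      } else {
        out[i] = result;
      }
    }
  }
}
{code}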
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347537)
Time Spent: 0.5h  (was: 20m)

> Support Decimal64 column multiplication with decimal64 Column/Scalar
> 
>
> Key: HIVE-22463
> URL: https://issues.apache.org/jira/browse/HIVE-22463
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22463.1.patch, HIVE-22463.2.patch, 
> HIVE-22463.3.patch, HIVE-22463.5.patch, HIVE-22463.6.patch, 
> HIVE-22463.7.patch, HIVE-22463.8.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Support Decimal64 column multiplication with decimal64 Column/Scalar



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?focusedWorklogId=347536&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347536
 ]

ASF GitHub Bot logged work on HIVE-22463:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 17:14
Start Date: 21/Nov/19 17:14
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #846: HIVE-22463 
decimal64 multiplication
URL: https://github.com/apache/hive/pull/846#discussion_r348781170
 
 

 ##
 File path: 
ql/src/gen/vectorization/ExpressionTemplates/Decimal64ColumnMultiplyDecimal64Scalar.txt
 ##
 @@ -0,0 +1,219 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hive.ql.exec.vector.expressions.gen;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hive.ql.exec.vector.Decimal64ColumnVector;
+import org.apache.hadoop.hive.ql.exec.vector.VectorExpressionDescriptor;
+import org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.Decimal64Util;
+import org.apache.hadoop.hive.ql.exec.vector.expressions.VectorExpression;
+import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
+import org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo;
+import org.apache.hadoop.hive.ql.metadata.HiveException;
+
+/**
+ * Generated from template ColumnArithmeticScalar.txt, which covers decimal64 
arithmetic
+ * expressions between a column and a scalar.
+ */
+public class  extends VectorExpression {
+
+  private static final long serialVersionUID = 1L;
+
+  private final int colNum;
+  private final long value;
+
+  public (int colNum, long value, int outputColumnNum) {
+super(outputColumnNum);
+this.colNum = colNum;
+this.value = value;
+  }
+
+  public () {
+super();
+
+// Dummy final assignments.
+colNum = -1;
+value = 0;
+  }
+
+  @Override
+  public void evaluate(VectorizedRowBatch batch) throws HiveException {
+
+// return immediately if batch is empty
+final int n = batch.size;
+if (n == 0) {
+  return;
+}
+
+if (childExpressions != null) {
+  super.evaluateChildren(batch);
+}
+
+Decimal64ColumnVector inputColVector = (Decimal64ColumnVector) 
batch.cols[colNum];
+Decimal64ColumnVector outputColVector = (Decimal64ColumnVector) 
batch.cols[outputColumnNum];
+int[] sel = batch.selected;
+boolean[] inputIsNull = inputColVector.isNull;
+boolean[] outputIsNull = outputColVector.isNull;
+
+// We do not need to do a column reset since we are carefully changing the 
output.
+outputColVector.isRepeating = false;
+
+long[] vector = inputColVector.vector;
+long[] outputVector = outputColVector.vector;
+
+final long outputDecimal64AbsMax =
+HiveDecimalWritable.getDecimal64AbsMax(outputColVector.precision);
+DecimalTypeInfo lDecimalTypeInfo = (DecimalTypeInfo) inputTypeInfos[0];
+DecimalTypeInfo rDecimalTypeInfo = (DecimalTypeInfo) inputTypeInfos[1];
+HiveDecimalWritable writable = new HiveDecimalWritable();
+writable.deserialize64(value, lDecimalTypeInfo.scale() - 
rDecimalTypeInfo.scale());
 
 Review comment:
   Also I'm confused by what it actually does for the output result scaling here
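
For reference, a standalone arithmetic illustration (not Hive code) of how scales combine: a decimal64 long stores value * 10^scale, so multiplying two scaled longs yields a result whose scale is the sum of the input scales, and that raw product has to be rescaled to the declared output scale.

{code:java}
// Plain-Java illustration of decimal64 scale arithmetic; no Hive classes involved.
public class Decimal64ScaleDemo {
  public static void main(String[] args) {
    long a = 12345;                 // 123.45 at scale 2
    long b = 2500;                  // 2.500  at scale 3
    long raw = a * b;               // 30862500 at scale 5 (2 + 3), i.e. 308.62500
    int outputScale = 2;
    long rescaled = raw / 1000L;    // divide by 10^(5 - outputScale) -> 30862, i.e. 308.62
    System.out.println(raw + " at scale 5 -> " + rescaled + " at scale " + outputScale);
  }
}
{code}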
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347536)
Time Spent: 20m  (was: 10m)

> Support Decimal64 column multiplication with decimal64 Column/Scalar
> 
>
> Key: HIVE-22463
> URL: https://issues.apache.org/jira/browse/HIVE-22463
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: 

[jira] [Work logged] (HIVE-22463) Support Decimal64 column multiplication with decimal64 Column/Scalar

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22463?focusedWorklogId=347540&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347540
 ]

ASF GitHub Bot logged work on HIVE-22463:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 17:14
Start Date: 21/Nov/19 17:14
Worklog Time Spent: 10m 
  Work Description: t3rmin4t0r commented on pull request #846: HIVE-22463 
decimal64 multiplication
URL: https://github.com/apache/hive/pull/846#discussion_r348781677
 
 

 ##
 File path: 
ql/src/test/results/clientpositive/vector_decimal64_mul_decimal64column.q.out
 ##
 @@ -0,0 +1,142 @@
+PREHOOK: query: create external table 
vector_decimal64_mul_decimal64column(ss_ext_list_price decimal(7,2), 
ss_ext_wholesale_cost decimal(7,2), ss_ext_discount_amt decimal(7,2), 
ss_ext_sales_price decimal(7,2)) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' 
LINES TERMINATED BY '\n' STORED AS TEXTFILE
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@vector_decimal64_mul_decimal64column
+POSTHOOK: query: create external table 
vector_decimal64_mul_decimal64column(ss_ext_list_price decimal(7,2), 
ss_ext_wholesale_cost decimal(7,2), ss_ext_discount_amt decimal(7,2), 
ss_ext_sales_price decimal(7,2)) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' 
LINES TERMINATED BY '\n' STORED AS TEXTFILE
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@vector_decimal64_mul_decimal64column
+PREHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/decimal64table.csv' 
OVERWRITE INTO TABLE vector_decimal64_mul_decimal64column
+PREHOOK: type: LOAD
+ A masked pattern was here 
+PREHOOK: Output: default@vector_decimal64_mul_decimal64column
+POSTHOOK: query: LOAD DATA LOCAL INPATH '../../data/files/decimal64table.csv' 
OVERWRITE INTO TABLE vector_decimal64_mul_decimal64column
+POSTHOOK: type: LOAD
+ A masked pattern was here 
+POSTHOOK: Output: default@vector_decimal64_mul_decimal64column
+PREHOOK: query: create table 
vector_decimal64_mul_decimal64column_tmp(ss_ext_list_price decimal(7,2), 
ss_ext_wholesale_cost decimal(7,2), ss_ext_discount_amt decimal(7,2), 
ss_ext_sales_price decimal(7,2)) stored as ORC
+PREHOOK: type: CREATETABLE
+PREHOOK: Output: database:default
+PREHOOK: Output: default@vector_decimal64_mul_decimal64column_tmp
+POSTHOOK: query: create table 
vector_decimal64_mul_decimal64column_tmp(ss_ext_list_price decimal(7,2), 
ss_ext_wholesale_cost decimal(7,2), ss_ext_discount_amt decimal(7,2), 
ss_ext_sales_price decimal(7,2)) stored as ORC
+POSTHOOK: type: CREATETABLE
+POSTHOOK: Output: database:default
+POSTHOOK: Output: default@vector_decimal64_mul_decimal64column_tmp
+PREHOOK: query: insert into table vector_decimal64_mul_decimal64column_tmp 
select * from vector_decimal64_mul_decimal64column
+PREHOOK: type: QUERY
+PREHOOK: Input: default@vector_decimal64_mul_decimal64column
+PREHOOK: Output: default@vector_decimal64_mul_decimal64column_tmp
+POSTHOOK: query: insert into table vector_decimal64_mul_decimal64column_tmp 
select * from vector_decimal64_mul_decimal64column
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@vector_decimal64_mul_decimal64column
+POSTHOOK: Output: default@vector_decimal64_mul_decimal64column_tmp
+POSTHOOK: Lineage: 
vector_decimal64_mul_decimal64column_tmp.ss_ext_discount_amt SIMPLE 
[(vector_decimal64_mul_decimal64column)vector_decimal64_mul_decimal64column.FieldSchema(name:ss_ext_discount_amt,
 type:decimal(7,2), comment:null), ]
+POSTHOOK: Lineage: vector_decimal64_mul_decimal64column_tmp.ss_ext_list_price 
SIMPLE 
[(vector_decimal64_mul_decimal64column)vector_decimal64_mul_decimal64column.FieldSchema(name:ss_ext_list_price,
 type:decimal(7,2), comment:null), ]
+POSTHOOK: Lineage: vector_decimal64_mul_decimal64column_tmp.ss_ext_sales_price 
SIMPLE 
[(vector_decimal64_mul_decimal64column)vector_decimal64_mul_decimal64column.FieldSchema(name:ss_ext_sales_price,
 type:decimal(7,2), comment:null), ]
+POSTHOOK: Lineage: 
vector_decimal64_mul_decimal64column_tmp.ss_ext_wholesale_cost SIMPLE 
[(vector_decimal64_mul_decimal64column)vector_decimal64_mul_decimal64column.FieldSchema(name:ss_ext_wholesale_cost,
 type:decimal(7,2), comment:null), ]
+PREHOOK: query: explain vectorization detail select 
sum(ss_ext_list_price*ss_ext_discount_amt) from 
vector_decimal64_mul_decimal64column_tmp
+PREHOOK: type: QUERY
+PREHOOK: Input: default@vector_decimal64_mul_decimal64column_tmp
+ A masked pattern was here 
+POSTHOOK: query: explain vectorization detail select 
sum(ss_ext_list_price*ss_ext_discount_amt) from 
vector_decimal64_mul_decimal64column_tmp
+POSTHOOK: type: QUERY
+POSTHOOK: Input: default@vector_decimal64_mul_decimal64column_tmp
+ A masked pattern was here 
+PLAN VECTORIZATION:
+  enabled: true
+  enabledConditionsMet: [hive.vectorized.execution.enabled IS true]
+
+STAGE 

[jira] [Commented] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-21 Thread Mustafa Iman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979427#comment-16979427
 ] 

Mustafa Iman commented on HIVE-22499:
-

HIVE-22499.patch includes orc-1.5.8rc0. This will be changed when orc-1.5.8 is 
available.

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-22499.WIP.patch, HIVE-22499.patch
>
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while this 
> patch serves the same purpose but falls back to a retry via 
> FileSystem.get() in case the supplier fails at runtime.
> Also as a side-note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).
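
A hedged sketch of the fallback described above, with a hypothetical helper name; it relies only on the standard Hadoop FileSystem/Path API and is not the patch itself:

{code:java}
import java.io.IOException;
import java.util.function.Supplier;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: resolve the FileSystem through a supplier, retrying via the
// plain API if the supplier fails at runtime.
final class FileSystemResolver {

  static FileSystem resolve(Supplier<FileSystem> fsSupplier, Path path, Configuration conf)
      throws IOException {
    try {
      FileSystem fs = fsSupplier.get();
      if (fs != null) {
        return fs;
      }
    } catch (RuntimeException e) {
      // fall through to the retry below
    }
    // fallback retry, equivalent to FileSystem.get(path.toUri(), conf)
    return path.getFileSystem(conf);
  }
}
{code}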



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-21 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman updated HIVE-22499:

Attachment: HIVE-22499.patch

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-22499.WIP.patch, HIVE-22499.patch
>
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while this 
> patch serves the same purpose but falls back to a retry via 
> FileSystem.get() in case the supplier fails at runtime.
> Also as a side-note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22499) LLAP: Add an EncodedReaderOptions to extend ORC impl for options

2019-11-21 Thread Mustafa Iman (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mustafa Iman reassigned HIVE-22499:
---

Assignee: Mustafa Iman

> LLAP: Add an EncodedReaderOptions to extend ORC impl for options
> 
>
> Key: HIVE-22499
> URL: https://issues.apache.org/jira/browse/HIVE-22499
> Project: Hive
>  Issue Type: Bug
>  Components: llap, ORC
>Reporter: Gopal Vijayaraghavan
>Assignee: Mustafa Iman
>Priority: Major
> Attachments: HIVE-22499.WIP.patch, HIVE-22499.patch
>
>
> ORC-570 is an ABI change to the way getFileSystem() works, adding another 
> exception to the implementation.
> Accepting and using that change requires waiting for an ORC release, while this 
> patch serves the same purpose but falls back to a retry via 
> FileSystem.get() in case the supplier fails at runtime.
> Also as a side-note, the FS.get() call is always used in the cases where the 
> file is not being read from a cache such as EncodedOrcFile (so the upstream 
> API change might be overkill).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22523) The error handler in LlapRecordReader might block if its queue is full

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979415#comment-16979415
 ] 

Hive QA commented on HIVE-22523:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12986407/HIVE-22523.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17709 tests 
executed
*Failed tests:*
{noformat}
org.apache.hive.jdbc.TestServiceDiscoveryWithMiniHS2.testGetAllUrlsDirect 
(batchId=289)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/19530/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/19530/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-19530/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12986407 - PreCommit-HIVE-Build

> The error handler in LlapRecordReader might block if its queue is full
> --
>
> Key: HIVE-22523
> URL: https://issues.apache.org/jira/browse/HIVE-22523
> Project: Hive
>  Issue Type: Bug
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22523.1.patch
>
>
> In setError() we set the value of an atomic reference (pendingError) and we 
> also put the error in a queue. The latter seems not just unnecessary, but it 
> might also block the caller of the handler if the queue is full. Also, closing of 
> the reader might not be properly handled, as some of the flags are not 
> volatile.
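
A hedged sketch of the non-blocking alternative the description points at, with hypothetical names rather than the actual LlapRecordReader fields: the first error is published through the AtomicReference and the close flag is volatile, so setError() never blocks on a bounded queue.

{code:java}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch, not the real LlapRecordReader code.
final class ReaderErrorState {
  private final AtomicReference<Throwable> pendingError = new AtomicReference<>();
  private volatile boolean closed;

  void setError(Throwable t) {
    // keep only the first error; never block the caller of the handler
    pendingError.compareAndSet(null, t);
  }

  void close() {
    closed = true;   // volatile write, visible to the consumer thread
  }

  boolean isClosed() {
    return closed;
  }

  void rethrowIfErrored() throws IOException {
    Throwable t = pendingError.get();
    if (t != null) {
      throw new IOException("reader failed", t);
    }
  }
}
{code}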



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HIVE-16220) Memory leak when creating a table using location and NameNode in HA

2019-11-21 Thread Thomas Mann (FiduciaGAD) (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979402#comment-16979402
 ] 

Thomas Mann (FiduciaGAD) edited comment on HIVE-16220 at 11/21/19 4:29 PM:
---

can confirm same issue

 

for HDP 3.1.0 and Hive in Version 3.0.0.3.1

Circumstances: Sqoop Job importing Data from DB2 via HDFS/MapReduce and loading 
them into Hive

Configuration: NameNode in HA

 

Memory Leak:

{color:#00}44,343 instances of 
{color}*"org.apache.hadoop.hive.conf.HiveConf"*{color:#00}, loaded by 
{color}*"sun.misc.Launcher$AppClassLoader @ 0x7fa7b62f5400"*{color:#00} 
occupy {color}*18,993,039,520 (96.13%)*{color:#00} bytes. These instances 
are referenced from one instance of 
{color}*"java.util.concurrent.ConcurrentHashMap$Node[]"*{color:#00}, loaded 
by {color}*""*


was (Author: xcg2945):
can confirm same issue

for HDP 3.1.0 and Hive in Version 3.0.0.3.1 

> Memory leak when creating a table using location and NameNode in HA
> ---
>
> Key: HIVE-16220
> URL: https://issues.apache.org/jira/browse/HIVE-16220
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 3.0.0
> Environment: HDP-2.4.0.0
> HDP-3.1.0.0
>Reporter: Angel Alvarez Pascua
>Priority: Major
>
> The following simple DDL
> CREATE TABLE `test`(`field` varchar(1)) LOCATION 
> 'hdfs://benderHA/apps/hive/warehouse/test'
> ends up generating a huge memory leak in the HiveServer2 service.
> After two weeks without a restart, the service stops suddenly because of 
> OutOfMemory errors.
> This only happens when we're in an environment in which the NameNode is in 
> HA,  otherwise, nothing (so weird) happens. If the location clause is not 
> present, everything is also fine.
> It seems multiple instances of Hadoop Configuration are created when we're 
> in an HA environment:
> 
> 2.618 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 350.263.816 (81,66%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""
> 
> 5.216 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 699.901.416 (87,32%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-16220) Memory leak when creating a table using location and NameNode in HA

2019-11-21 Thread Thomas Mann (FiduciaGAD) (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mann (FiduciaGAD) updated HIVE-16220:

Environment: 
HDP-2.4.0.0

HDP-3.1.0.0

  was:HDP-2.4.0.0


> Memory leak when creating a table using location and NameNode in HA
> ---
>
> Key: HIVE-16220
> URL: https://issues.apache.org/jira/browse/HIVE-16220
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1
> Environment: HDP-2.4.0.0
> HDP-3.1.0.0
>Reporter: Angel Alvarez Pascua
>Priority: Major
>
> The following simple DDL
> CREATE TABLE `test`(`field` varchar(1)) LOCATION 
> 'hdfs://benderHA/apps/hive/warehouse/test'
> ends up generating a huge memory leak in the HiveServer2 service.
> After two weeks without a restart, the service stops suddenly because of 
> OutOfMemory errors.
> This only happens when we're in an environment in which the NameNode is in 
> HA,  otherwise, nothing (so weird) happens. If the location clause is not 
> present, everything is also fine.
> It seems multiple instances of Hadoop Configuration are created when we're 
> in an HA environment:
> 
> 2.618 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 350.263.816 (81,66%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""
> 
> 5.216 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 699.901.416 (87,32%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-16220) Memory leak when creating a table using location and NameNode in HA

2019-11-21 Thread Thomas Mann (FiduciaGAD) (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Mann (FiduciaGAD) updated HIVE-16220:

Affects Version/s: 3.0.0

> Memory leak when creating a table using location and NameNode in HA
> ---
>
> Key: HIVE-16220
> URL: https://issues.apache.org/jira/browse/HIVE-16220
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1, 3.0.0
> Environment: HDP-2.4.0.0
> HDP-3.1.0.0
>Reporter: Angel Alvarez Pascua
>Priority: Major
>
> The following simple DDL
> CREATE TABLE `test`(`field` varchar(1)) LOCATION 
> 'hdfs://benderHA/apps/hive/warehouse/test'
> ends up generating a huge memory leak in the HiveServer2 service.
> After two weeks without a restart, the service stops suddenly because of 
> OutOfMemory errors.
> This only happens when we're in an environment in which the NameNode is in 
> HA,  otherwise, nothing (so weird) happens. If the location clause is not 
> present, everything is also fine.
> It seems multiple instances of Hadoop Configuration are created when we're 
> in an HA environment:
> 
> 2.618 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 350.263.816 (81,66%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""
> 
> 5.216 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 699.901.416 (87,32%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-16220) Memory leak when creating a table using location and NameNode in HA

2019-11-21 Thread Thomas Mann (FiduciaGAD) (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979402#comment-16979402
 ] 

Thomas Mann (FiduciaGAD) commented on HIVE-16220:
-

can confirm same issue

for HDP 3.1.0 and Hive in Version 3.0.0.3.1 

> Memory leak when creating a table using location and NameNode in HA
> ---
>
> Key: HIVE-16220
> URL: https://issues.apache.org/jira/browse/HIVE-16220
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 1.2.1
> Environment: HDP-2.4.0.0
>Reporter: Angel Alvarez Pascua
>Priority: Major
>
> The following simple DDL
> CREATE TABLE `test`(`field` varchar(1)) LOCATION 
> 'hdfs://benderHA/apps/hive/warehouse/test'
> ends up generating a huge memory leak in the HiveServer2 service.
> After two weeks without a restart, the service stops suddenly because of 
> OutOfMemory errors.
> This only happens when we're in an environment in which the NameNode is in 
> HA,  otherwise, nothing (so weird) happens. If the location clause is not 
> present, everything is also fine.
> It seems multiple instances of Hadoop Configuration are created when we're 
> in an HA environment:
> 
> 2.618 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 350.263.816 (81,66%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""
> 
> 5.216 instances of "org.apache.hadoop.conf.Configuration", loaded by 
> "sun.misc.Launcher$AppClassLoader @ 0x4d260de88" 
> occupy 699.901.416 (87,32%) bytes. These instances are referenced from one 
> instance of "java.util.HashMap$Node[]", 
> loaded by ""



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22483) Vectorize UDF datetime_legacy_hybrid_calendar

2019-11-21 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-22483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979394#comment-16979394
 ] 

Ádám Szita commented on HIVE-22483:
---

Looks good, +1 on latest [^HIVE-22483.05.patch]

> Vectorize UDF datetime_legacy_hybrid_calendar
> -
>
> Key: HIVE-22483
> URL: https://issues.apache.org/jira/browse/HIVE-22483
> Project: Hive
>  Issue Type: Improvement
>Reporter: Karen Coppage
>Assignee: Karen Coppage
>Priority: Minor
> Fix For: 4.0.0
>
> Attachments: HIVE-22483.01.patch, HIVE-22483.02.patch, 
> HIVE-22483.03.patch, HIVE-22483.04.patch, HIVE-22483.04.patch, 
> HIVE-22483.04.patch, HIVE-22483.05.patch, HIVE-22483.05.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22524) CommandProcessorException should utilize standard Exception fields

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22524:

Status: Patch Available  (was: Open)

> CommandProcessorException should utilize standard Exception fields
> --
>
> Key: HIVE-22524
> URL: https://issues.apache.org/jira/browse/HIVE-22524
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22524.01.patch
>
>
> CommandProcessorException right now has:
> * getCause() inherited from Exception
> * getException() local implementation
> * getMessage() inherited from Exception
> * getErrorMessage() local implementation



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22523) The error handler in LlapRecordReader might block if its queue is full

2019-11-21 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16979376#comment-16979376
 ] 

Hive QA commented on HIVE-22523:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
46s{color} | {color:blue} llap-server in master has 90 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-19530/dev-support/hive-personality.sh
 |
| git revision | master / df8e185 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: llap-server U: llap-server |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-19530/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> The error handler in LlapRecordReader might block if its queue is full
> --
>
> Key: HIVE-22523
> URL: https://issues.apache.org/jira/browse/HIVE-22523
> Project: Hive
>  Issue Type: Bug
>Reporter: Attila Magyar
>Assignee: Attila Magyar
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22523.1.patch
>
>
> In setError() we set the value of an atomic reference (pendingError) and we 
> also put the error in a queue. The latter seems not just unnecessary, but it 
> might also block the caller of the handler if the queue is full. Also, closing of 
> the reader might not be properly handled, as some of the flags are not 
> volatile.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22524) CommandProcessorException should utilize standard Exception fields

2019-11-21 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22524:

Attachment: HIVE-22524.01.patch

> CommandProcessorException should utilize standard Exception fields
> --
>
> Key: HIVE-22524
> URL: https://issues.apache.org/jira/browse/HIVE-22524
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22524.01.patch
>
>
> CommandProcessorException right now has:
> * getCause() inherited from Exception
> * getException() local implementation
> * getMessage() inherited from Exception
> * getErrorMessage() local implementation



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22486) Send only accessed columns for masking policies request

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22486?focusedWorklogId=347459&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347459
 ]

ASF GitHub Bot logged work on HIVE-22486:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 15:32
Start Date: 21/Nov/19 15:32
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #848: HIVE-22486
URL: https://github.com/apache/hive/pull/848#discussion_r349153623
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRelFieldTrimmer.java
 ##
 @@ -674,10 +677,14 @@ public TrimResult trimFields(Project project, 
ImmutableBitSet fieldsUsed,
 // set columnAccessInfo for ViewColumnAuthorization
 for (Ord ord : Ord.zip(project.getProjects())) {
   if (fieldsUsed.get(ord.i)) {
-if (this.columnAccessInfo != null && this.viewProjectToTableSchema != 
null
-&& this.viewProjectToTableSchema.containsKey(project)) {
+if (this.viewProjectToTableSchema != null && 
this.viewProjectToTableSchema.containsKey(project)) {
   Table tab = this.viewProjectToTableSchema.get(project);
-  this.columnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  if (this.directColumnAccessInfo != null) {
+this.directColumnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  }
+  if (this.allColumnAccessInfo != null) {
+this.allColumnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  }
 
 Review comment:
   or... I'm now thinking about deciding beforehand whether to use the "usage aware" 
one or not, and using a specifically tailored `ColumnAccessInfo` 
implementation
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347459)
Time Spent: 0.5h  (was: 20m)

> Send only accessed columns for masking policies request
> ---
>
> Key: HIVE-22486
> URL: https://issues.apache.org/jira/browse/HIVE-22486
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, 
> HIVE-22486.03.patch, HIVE-22486.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently, we send all columns in the masking request, even if they are not 
> accessed by the given query. We could send only those columns for which the 
> masking policy will be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-22486) Send only accessed columns for masking policies request

2019-11-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22486?focusedWorklogId=347458&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-347458
 ]

ASF GitHub Bot logged work on HIVE-22486:
-

Author: ASF GitHub Bot
Created on: 21/Nov/19 15:28
Start Date: 21/Nov/19 15:28
Worklog Time Spent: 10m 
  Work Description: kgyrtkirk commented on pull request #848: HIVE-22486
URL: https://github.com/apache/hive/pull/848#discussion_r349151290
 
 

 ##
 File path: 
ql/src/java/org/apache/hadoop/hive/ql/optimizer/calcite/rules/HiveRelFieldTrimmer.java
 ##
 @@ -674,10 +677,14 @@ public TrimResult trimFields(Project project, 
ImmutableBitSet fieldsUsed,
 // set columnAccessInfo for ViewColumnAuthorization
 for (Ord ord : Ord.zip(project.getProjects())) {
   if (fieldsUsed.get(ord.i)) {
-if (this.columnAccessInfo != null && this.viewProjectToTableSchema != 
null
-&& this.viewProjectToTableSchema.containsKey(project)) {
+if (this.viewProjectToTableSchema != null && 
this.viewProjectToTableSchema.containsKey(project)) {
   Table tab = this.viewProjectToTableSchema.get(project);
-  this.columnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  if (this.directColumnAccessInfo != null) {
+this.directColumnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  }
+  if (this.allColumnAccessInfo != null) {
+this.allColumnAccessInfo.add(tab.getCompleteName(), 
tab.getAllCols().get(ord.i).getName());
+  }
 
 Review comment:
   this actually duplicates the `ColumnAccessInfo` handling in a lot of places; 
wouldn't it make sense to extend `ColumnAccessInfo` internally, enable it 
to "mark" columns, and add a method which is able to retrieve columns based on 
whether they are used or not?
   
   right now `CAI` is a `Map<TableNameString, Set<ColumnNameString>>`; how 
about changing this to a `Map<TableNameString, Set<ColumnAccess>>`, where 
`ColumnAccess` would be the `ColumnNameString` plus some extra info which is 
added here.
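
A hedged sketch of that idea, with hypothetical names (not existing Hive classes): a single usage-aware structure records whether each column is directly used, and the masking request can then pull only the directly used columns.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Hypothetical sketch of a usage-aware ColumnAccessInfo-like structure.
final class ColumnAccess {
  final String columnName;
  final boolean directlyUsed;

  ColumnAccess(String columnName, boolean directlyUsed) {
    this.columnName = columnName;
    this.directlyUsed = directlyUsed;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof ColumnAccess)) {
      return false;
    }
    ColumnAccess other = (ColumnAccess) o;
    return directlyUsed == other.directlyUsed && columnName.equals(other.columnName);
  }

  @Override
  public int hashCode() {
    return Objects.hash(columnName, directlyUsed);
  }
}

final class UsageAwareColumnAccessInfo {
  private final Map<String, Set<ColumnAccess>> tableToColumns = new HashMap<>();

  void add(String tableName, String columnName, boolean directlyUsed) {
    tableToColumns.computeIfAbsent(tableName, k -> new HashSet<>())
        .add(new ColumnAccess(columnName, directlyUsed));
  }

  // Columns to send with the masking request: only the ones the query actually touches.
  Set<String> directColumns(String tableName) {
    Set<String> result = new HashSet<>();
    for (ColumnAccess access : tableToColumns.getOrDefault(tableName, Collections.emptySet())) {
      if (access.directlyUsed) {
        result.add(access.columnName);
      }
    }
    return result;
  }
}
{code}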
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 347458)
Time Spent: 20m  (was: 10m)

> Send only accessed columns for masking policies request
> ---
>
> Key: HIVE-22486
> URL: https://issues.apache.org/jira/browse/HIVE-22486
> Project: Hive
>  Issue Type: Improvement
>  Components: CBO
>Affects Versions: 4.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22486.01.patch, HIVE-22486.02.patch, 
> HIVE-22486.03.patch, HIVE-22486.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, we send all columns in the masking request, even if they are not 
> accessed by the given query. We could send only those columns for which the 
> masking policy will be necessary.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection

2019-11-21 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-22505:
--
Status: Patch Available  (was: Open)

> ClassCastException caused by wrong Vectorized operator selection
> 
>
> Key: HIVE-22505
> URL: https://issues.apache.org/jira/browse/HIVE-22505
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Critical
> Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, 
> HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, 
> HIVE-22505.7.patch, HIVE-22505.patch, query_error.out, 
> query_vector_explain.out, vectorized_join.q
>
>
> VectorMapJoinOuterFilteredOperator does not currently support full outer 
> joins, but using the current Vectorizer logic it can be selected when there 
> is a filter involved. This can make queries fail with ClassCastException when 
> their data and metadata in the VectorMapJoinOuterFilteredOperator do not 
> match.
> The query attached demonstrates the issue and the log attached shows the 
> java.lang.ClassCastException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection

2019-11-21 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-22505:
--
Status: Open  (was: Patch Available)

> ClassCastException caused by wrong Vectorized operator selection
> 
>
> Key: HIVE-22505
> URL: https://issues.apache.org/jira/browse/HIVE-22505
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Critical
> Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, 
> HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, 
> HIVE-22505.7.patch, HIVE-22505.patch, query_error.out, 
> query_vector_explain.out, vectorized_join.q
>
>
> VectorMapJoinOuterFilteredOperator does not currently support full outer 
> joins, but using the current Vectorizer logic it can be selected when there 
> is a filter involved. This can make queries fail with ClassCastException when 
> their data and metadata in the VectorMapJoinOuterFilteredOperator do not 
> match.
> The query attached demonstrates the issue and the log attached shows the 
> java.lang.ClassCastException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22505) ClassCastException caused by wrong Vectorized operator selection

2019-11-21 Thread Panagiotis Garefalakis (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Panagiotis Garefalakis updated HIVE-22505:
--
Attachment: HIVE-22505.7.patch

> ClassCastException caused by wrong Vectorized operator selection
> 
>
> Key: HIVE-22505
> URL: https://issues.apache.org/jira/browse/HIVE-22505
> Project: Hive
>  Issue Type: Bug
>  Components: Vectorization
>Reporter: Panagiotis Garefalakis
>Assignee: Panagiotis Garefalakis
>Priority: Critical
> Attachments: HIVE-22505.2.patch, HIVE-22505.3.patch, 
> HIVE-22505.4.patch, HIVE-22505.5.patch, HIVE-22505.6.patch, 
> HIVE-22505.7.patch, HIVE-22505.patch, query_error.out, 
> query_vector_explain.out, vectorized_join.q
>
>
> VectorMapJoinOuterFilteredOperator does not currently support full outer 
> joins, but using the current Vectorizer logic it can be selected when there 
> is a filter involved. This can make queries fail with ClassCastException when 
> their data and metadata in the VectorMapJoinOuterFilteredOperator do not 
> match.
> The query attached demonstrates the issue and the log attached shows the 
> java.lang.ClassCastException



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

