[jira] [Updated] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Rajkumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22712:
--
Status: Open  (was: Patch Available)

> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.
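
For illustration only, a minimal sketch of the idea described above (this is not 
the attached HIVE-22712 patch): capture the session's user-defined queue under 
the reexec.overlay prefix mentioned in the description, so that a re-executed 
query does not fall back to the default queue. The class and helper names are 
hypothetical, and using a plain Hadoop Configuration is an assumption made to 
keep the example self-contained.
{noformat}
import org.apache.hadoop.conf.Configuration;

public class QueueOverlayExample {
  // Hypothetical helper: remember the user-defined Tez queue under the
  // "reexec.overlay." prefix so the same queue is re-applied on re-execution.
  static void preserveUserQueue(Configuration sessionConf) {
    String userQueue = sessionConf.get("tez.queue.name");
    if (userQueue != null && !userQueue.isEmpty()) {
      sessionConf.set("reexec.overlay.tez.queue.name", userQueue);
    }
  }
}
{noformat}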



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013186#comment-17013186
 ] 

Hive QA commented on HIVE-22716:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990557/HIVE-22716.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 17872 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.client.TestCatalogs.alterChangeName[Remote] 
(batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.alterNonExistentCatalog[Remote]
 (batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.catalogOperations[Remote] 
(batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.dropCatalogWithNonEmptyDefaultDb[Remote]
 (batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.dropHiveCatalog[Remote] 
(batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.dropNonEmptyCatalog[Remote]
 (batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.dropNonExistentCatalog[Remote]
 (batchId=229)
org.apache.hadoop.hive.metastore.client.TestCatalogs.getNonExistentCatalog[Remote]
 (batchId=229)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20147/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20147/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20147/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990557 - PreCommit-HIVE-Build

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch, 
> HIVE-22716.3.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.
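
For illustration, a minimal sketch of one possible correction (not necessarily 
what the attached patches do): pass the number of bytes actually requested, 
bb.remaining(), to readInternal instead of the -1 placeholder, and only copy 
into the buffer when something was read. The sketch reuses the readInternal 
method shown above and is otherwise an assumption.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
    // Sketch only: request bb.remaining() bytes instead of passing -1 as len.
    int toRead = bb.remaining();
    int result;
    if (bb.hasArray()) {
      result = readInternal(bb.array(), bb.arrayOffset() + bb.position(), toRead);
      if (result > 0) {
        bb.position(bb.position() + result);
      }
    } else {
      byte[] b = new byte[toRead];
      result = readInternal(b, 0, toRead);
      if (result > 0) {
        bb.put(b, 0, result);
      }
    }
    return result;
  }
{noformat}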



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013269#comment-17013269
 ] 

Hive QA commented on HIVE-22712:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990565/HIVE-22712.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20149/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20149/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20149/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2020-01-10 23:23:46.543
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-20149/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2020-01-10 23:23:46.547
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at dfcb0a4 HIVE-22595: Dynamic partition inserts fail on Avro table 
table with external schema (Jason Dere, reviewed by Jesus Camacho Rodriguez)
+ git clean -f -d
Removing standalone-metastore/metastore-server/src/gen/
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at dfcb0a4 HIVE-22595: Dynamic partition inserts fail on Avro table 
table with external schema (Jason Dere, reviewed by Jesus Camacho Rodriguez)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2020-01-10 23:23:49.335
+ rm -rf ../yetus_PreCommit-HIVE-Build-20149
+ mkdir ../yetus_PreCommit-HIVE-Build-20149
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-20149
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-20149/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
Trying to apply the patch with -p0
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/reexec/ReExecutionOverlayPlugin.java: 
does not exist in index
Trying to apply the patch with -p1
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc5288251547150808943.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc5288251547150808943.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/metastore-common/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) on project hive-shims-0.23: Execution 
process-resource-bundles of goal 
org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process failed. 
ConcurrentModificationException -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hive-shims-0.23
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf 

[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013268#comment-17013268
 ] 

Hive QA commented on HIVE-22716:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990563/HIVE-22716.4.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17872 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20148/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20148/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20148/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990563 - PreCommit-HIVE-Build

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch, 
> HIVE-22716.3.patch, HIVE-22716.4.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Attachment: HIVE-22716.3.patch

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch, 
> HIVE-22716.3.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Rajkumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22712:
--
Status: Open  (was: Patch Available)

> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Rajkumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22712:
--
Attachment: HIVE-22712.01.patch
Status: Patch Available  (was: Open)

> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Rajkumar Singh (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013227#comment-17013227
 ] 

Rajkumar Singh commented on HIVE-22712:
---

The test failures seem unrelated to the patch; I ran the tests manually and they 
succeeded. Uploading the patch again for a clean run.

> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013236#comment-17013236
 ] 

Hive QA commented on HIVE-22716:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20148/dev-support/hive-personality.sh
 |
| git revision | master / dfcb0a4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20148/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch, 
> HIVE-22716.3.patch, HIVE-22716.4.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition 

[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013149#comment-17013149
 ] 

Hive QA commented on HIVE-22716:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990535/HIVE-22716.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17872 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=90)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20146/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20146/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20146/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990535 - PreCommit-HIVE-Build

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Rajkumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajkumar Singh updated HIVE-22712:
--
Attachment: HIVE-22712.02.patch
Status: Patch Available  (was: Open)

> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.02.patch, 
> HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013162#comment-17013162
 ] 

Hive QA commented on HIVE-22716:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
9s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 16s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20147/dev-support/hive-personality.sh
 |
| git revision | master / dfcb0a4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20147/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch, 
> HIVE-22716.3.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> 

[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Attachment: HIVE-22716.4.patch

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch, 
> HIVE-22716.3.patch, HIVE-22716.4.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22708) Test fix for http transport

2020-01-10 Thread Naveen Gangam (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013210#comment-17013210
 ] 

Naveen Gangam commented on HIVE-22708:
--

The lone test failure does not appear to be related to the fix, so +1 from me as 
well.

> Test fix for http transport
> ---
>
> Key: HIVE-22708
> URL: https://issues.apache.org/jira/browse/HIVE-22708
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 4.0.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
>Priority: Major
> Attachments: HIVE-22708.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013295#comment-17013295
 ] 

Hive QA commented on HIVE-22712:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
59s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 1 new + 1 unchanged - 0 fixed 
= 2 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  7s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20150/dev-support/hive-personality.sh
 |
| git revision | master / dfcb0a4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20150/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20150/yetus.txt |
| Powered by | Apache Yetus  http://yetus.apache.org |


This message was automatically generated.



> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.02.patch, 
> HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22261) Add tests for materialized view rewriting with window functions

2020-01-10 Thread Jesus Camacho Rodriguez (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-22261:
---
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Add tests for materialized view rewriting with window functions
> ---
>
> Key: HIVE-22261
> URL: https://issues.apache.org/jira/browse/HIVE-22261
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views, Tests
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22261.patch, af2.sql
>
>
> Materialized views don't support window functions. At a minimum, we should 
> print a friendlier message when the rewrite fails (the materialized view can 
> still be created with "disable rewrite").
> Script is attached.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22712) ReExec Driver execute submit the query in default queue irrespective of user defined queue

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013318#comment-17013318
 ] 

Hive QA commented on HIVE-22712:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990614/HIVE-22712.02.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17872 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20150/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20150/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20150/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990614 - PreCommit-HIVE-Build

> ReExec Driver execute submit the query in default queue irrespective of user 
> defined queue
> --
>
> Key: HIVE-22712
> URL: https://issues.apache.org/jira/browse/HIVE-22712
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2
>Affects Versions: 3.1.2
> Environment: Hive-3
>Reporter: Rajkumar Singh
>Assignee: Rajkumar Singh
>Priority: Major
> Attachments: HIVE-22712.01.patch, HIVE-22712.02.patch, 
> HIVE-22712.patch
>
>
> We intentionally unset the queue name in 
> TezSessionState#startSessionAndContainers. As a result, re-execution creates 
> a new session in the default queue, which causes problems, and it is 
> cumbersome to add reexec.overlay.tez.queue.name at the session level.
> I could not find a better way of setting the queue name (I am open to 
> suggestions here), since it could create a conflict between the global queue 
> name and the user-defined queue; that is why it is set during the 
> initialization of ReExecutionOverlayPlugin.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22714) TestScheduledQueryService is flaky

2020-01-10 Thread Jason Dere (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22714:
--
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to master

> TestScheduledQueryService is flaky
> --
>
> Key: HIVE-22714
> URL: https://issues.apache.org/jira/browse/HIVE-22714
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22714.1.patch
>
>
> {noformat}
> [ERROR] Failures: 
> [ERROR]   TestScheduledQueryService.testScheduledQueryExecution:152 
> Expected: <5>
>  but: was <0>
> [INFO] 
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0
> {noformat}
> Looks like sometimes we are not waiting long enough for the INSERT query to 
> complete and the SELECT runs before it finishes:
> {noformat}
> $ egrep "insert|select" 
> target/surefire-reports/org.apache.hadoop.hive.ql.schq.TestScheduledQueryService-output.txt
>  | grep HOOK
> PREHOOK: query: insert into tu values(1),(2),(3),(4),(5)
> 2020-01-09T14:49:09,497  INFO [SchQ 0] SessionState: PREHOOK: query: insert 
> into tu values(1),(2),(3),(4),(5)
> PREHOOK: query: select 1 from tu
> 2020-01-09T14:49:11,452  INFO [main] SessionState: PREHOOK: query: select 1 
> from tu
> POSTHOOK: query: select 1 from tu
> 2020-01-09T14:49:11,452  INFO [main] SessionState: POSTHOOK: query: select 1 
> from tu
> POSTHOOK: query: insert into tu values(1),(2),(3),(4),(5)
> 2020-01-09T14:49:12,062  INFO [SchQ 0] SessionState: POSTHOOK: query: insert 
> into tu values(1),(2),(3),(4),(5)
> {noformat}
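
Illustration only, not necessarily the committed fix: one common way to deflake 
such a test is to poll for the expected row count with a deadline instead of 
relying on a fixed sleep. The helper below is hypothetical; rowCount stands in 
for whatever the test uses to count rows in the target table.
{noformat}
import java.util.function.IntSupplier;

public class AwaitRows {
  // Hypothetical test helper: wait until rowCount reports at least 'expected'
  // rows, or fail after 'timeoutMs' milliseconds.
  static void awaitRowCount(IntSupplier rowCount, int expected, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (rowCount.getAsInt() < expected) {
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError("timed out waiting for " + expected + " rows");
      }
      Thread.sleep(200); // back off briefly before re-checking the row count
    }
  }
}
{noformat}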



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22680) Replace Base64 in druid-handler Package

2020-01-10 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22680?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HIVE-22680:
--
Attachment: (was: HIVE-22680.1.patch)

> Replace Base64 in druid-handler Package
> ---
>
> Key: HIVE-22680
> URL: https://issues.apache.org/jira/browse/HIVE-22680
> Project: Hive
>  Issue Type: Sub-task
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Attachments: HIVE-22680.1.patch, HIVE-22680.1.patch
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Description: 
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
    // Simple implementation for now - currently Parquet uses heap buffers.
    int result = -1;
    if (bb.hasArray()) {
      result = readInternal(bb.array(), bb.arrayOffset(), result); // The readInternal is called with result=-1
      if (result > 0) {
        bb.position(bb.position() + result);
      }
    } else {
      byte[] b = new byte[bb.remaining()];
      result = readInternal(b, 0, result); // The readInternal is called with result=-1
      bb.put(b, 0, result);
    }
    return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
    if (position >= length) return -1;
    int argPos = offset, argEnd = offset + len; // Here argEnd will be -1
    while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
      if (bufferIx == cacheData.length) return (argPos - offset);
      ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
      int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
      data.position(data.position() + bufferPos);
      data.get(b, argPos, toConsume);
      if (data.remaining() == 0) {
        ++bufferIx;
        bufferPos = 0;
      } else {
        bufferPos += toConsume;
      }
      argPos += toConsume;
    }
    return len;
  }
{noformat}
The read(ByteBuffer bb) method was not called before, but the 1.11.0 Parquet 
release introduced some optimizations 
([PARQUET-1542|https://issues.apache.org/jira/browse/PARQUET-1542]), so this 
method is called now. This bug causes the TestMiniLlapCliDriver and 
TestMiniLlapLocalCliDriver q tests to fail with the new Parquet version.

  was:
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
    // Simple implementation for now - currently Parquet uses heap buffers.
    int result = -1;
    if (bb.hasArray()) {
      result = readInternal(bb.array(), bb.arrayOffset(), result); // The readInternal is called with result=-1
      if (result > 0) {
        bb.position(bb.position() + result);
      }
    } else {
      byte[] b = new byte[bb.remaining()];
      result = readInternal(b, 0, result); // The readInternal is called with result=-1
      bb.put(b, 0, result);
    }
    return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
    if (position >= length) return -1;
    int argPos = offset, argEnd = offset + len; // Here argEnd will be -1
    while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
      if (bufferIx == cacheData.length) return (argPos - offset);
      ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
      int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
      data.position(data.position() + bufferPos);
      data.get(b, argPos, toConsume);
      if (data.remaining() == 0) {
        ++bufferIx;
        bufferPos = 0;
      } else {
        bufferPos += toConsume;
      }
      argPos += toConsume;
    }
    return len;
  }
{noformat}
The read(ByteBuffer bb) method was not called before, but the 1.11.0 Parquet 
release introduced some optimizations 
([PARQUET-1542|https://issues.apache.org/jira/browse/PARQUET-1542]), so this 
method is called now. This bug causes the TestMiniLlapCliDriver and 
TestMiniLlapLocalCliDriver q tests to fail with the new Parquet version.


> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int 

[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Description: 
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
    // Simple implementation for now - currently Parquet uses heap buffers.
    int result = -1;
    if (bb.hasArray()) {
      result = readInternal(bb.array(), bb.arrayOffset(), result); // The readInternal is called with result=-1
      if (result > 0) {
        bb.position(bb.position() + result);
      }
    } else {
      byte[] b = new byte[bb.remaining()];
      result = readInternal(b, 0, result); // The readInternal is called with result=-1
      bb.put(b, 0, result);
    }
    return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
    if (position >= length) return -1;
    int argPos = offset, argEnd = offset + len; // Here argEnd will be -1
    while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
      if (bufferIx == cacheData.length) return (argPos - offset);
      ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
      int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
      data.position(data.position() + bufferPos);
      data.get(b, argPos, toConsume);
      if (data.remaining() == 0) {
        ++bufferIx;
        bufferPos = 0;
      } else {
        bufferPos += toConsume;
      }
      argPos += toConsume;
    }
    return len;
  }
{noformat}
The read(ByteBuffer bb) method was not called before, but the 1.11.0 Parquet 
release introduced some optimizations 
([PARQUET-1542|https://issues.apache.org/jira/browse/PARQUET-1542]), so this 
method is called now. This bug causes the TestMiniLlapCliDriver and 
TestMiniLlapLocalCliDriver q tests to fail with the new Parquet version.
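
To make the failure mode concrete, here is a minimal trace of the values for the 
common case where bb.hasArray() is true and bb.arrayOffset() is 0 (the comments 
only restate the reasoning from the description above):
{noformat}
// read(bb) starts with result = -1 and calls readInternal(array, 0, -1):
//   len    = -1
//   argPos = offset       = 0
//   argEnd = offset + len = -1
// The loop guard (argPos < argEnd) is (0 < -1), which is false on the first
// check, so the loop body never runs and readInternal returns len, i.e. -1.
// read(bb) then sees result <= 0 and reports -1 without copying any bytes.
{noformat}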

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result); // The readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations 
> ([PARQUET-1542|https://issues.apache.org/jira/browse/PARQUET-1542]), so this 
> method is called now. This bug causes the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests to fail with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22713) Constant propagation shouldn't be done for Join-Fil(*)-RS structure

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012698#comment-17012698
 ] 

Hive QA commented on HIVE-22713:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20141/dev-support/hive-personality.sh
 |
| git revision | master / f8e583f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20141/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Constant propagation shouldn't be done for Join-Fil(*)-RS structure
> ---
>
> Key: HIVE-22713
> URL: https://issues.apache.org/jira/browse/HIVE-22713
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22713.1.patch, HIVE-22713.2.patch
>
>
> Constant propagation shouldn't be done for Join-Fil(*)-RS structure too. 
> Since we output columns from the join if the structure is Join-Fil(*)-RS, the 
> expressions shouldn't be modified.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012779#comment-17012779
 ] 

Hive QA commented on HIVE-20934:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
5s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} ql: The patch generated 0 new + 408 unchanged - 4 
fixed = 408 total (was 412) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} itests/hive-unit: The patch generated 1 new + 138 
unchanged - 25 fixed = 139 total (was 163) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20142/dev-support/hive-personality.sh
 |
| git revision | master / f8e583f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20142/yetus/diff-checkstyle-itests_hive-unit.txt
 |
| modules | C: ql itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20142/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query 

[jira] [Commented] (HIVE-22648) Upgrade Parquet to 1.11.0

2020-01-10 Thread Marta Kuczora (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012649#comment-17012649
 ] 

Marta Kuczora commented on HIVE-22648:
--

The TestMiniLlapCliDriver and TestMiniLlapLocalCliDriver are failing because of 
a bug in ParquetFooterInputFromCache: HIVE-22716.

> Upgrade Parquet to 1.11.0
> -
>
> Key: HIVE-22648
> URL: https://issues.apache.org/jira/browse/HIVE-22648
> Project: Hive
>  Issue Type: Improvement
>  Components: File Formats
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
>  Labels: Parquet, parquet
> Attachments: HIVE-22648.1.patch
>
>
> Upgrade the Parquet version to 1.11.0.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-21215) Read Parquet INT64 timestamp

2020-01-10 Thread Marta Kuczora (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-21215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012650#comment-17012650
 ] 

Marta Kuczora commented on HIVE-21215:
--

Hi [~zhxjdwh], no, this issue is not solved yet. The Parquet update is blocked 
by a recently found bug in ParquetFooterInputFromCache (HIVE-22716). As soon 
as I get that fix in, we can go forward with upgrading the Parquet version and 
then with this patch.

> Read Parquet INT64 timestamp
> 
>
> Key: HIVE-21215
> URL: https://issues.apache.org/jira/browse/HIVE-21215
> Project: Hive
>  Issue Type: New Feature
>Reporter: Karen Coppage
>Assignee: Marta Kuczora
>Priority: Major
>
> [WIP]
> This patch enables Hive to start reading timestamps from Parquet written with 
> the new semantics:
> With Parquet version 1.11, a new timestamp LogicalType with base INT64 and 
> the following metadata is introduced:
> * boolean isAdjustedToUtc: marks whether the timestamp is converted to UTC 
> (aka Instant semantics) or not (LocalDateTime semantics).
> * enum TimeUnit (NANOS, MICROS, MILLIS): granularity of timestamp
> Upon reading, the semantics of these new timestamps will be determined by 
> their metadata, while the semantics of INT96 timestamps will continue to be 
> deduced from the writer metadata.
> This feature will be behind a flag for now.
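
A rough illustration (not part of the WIP patch; the class, column, and schema 
names below are made up) of how such an INT64 timestamp column can be declared 
with the Parquet 1.11 API, carrying the isAdjustedToUtc flag and the TimeUnit 
that readers inspect to pick the semantics:
{noformat}
import org.apache.parquet.schema.LogicalTypeAnnotation;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;
import org.apache.parquet.schema.Types;

public class Int64TimestampSchemaExample {
  public static void main(String[] args) {
    // Hypothetical schema: an INT64 timestamp column with Instant semantics
    // (isAdjustedToUTC=true) at microsecond granularity.
    MessageType schema = Types.buildMessage()
        .required(PrimitiveTypeName.INT64)
          .as(LogicalTypeAnnotation.timestampType(true, LogicalTypeAnnotation.TimeUnit.MICROS))
          .named("event_time")
        .named("record");
    System.out.println(schema);
  }
}
{noformat}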



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22706) Jdbc storage handler incorrectly interprets boolean column value in derby

2020-01-10 Thread Syed Shameerur Rahman (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012765#comment-17012765
 ] 

Syed Shameerur Rahman commented on HIVE-22706:
--

+1

> Jdbc storage handler incorrectly interprets boolean column value in derby
> -
>
> Key: HIVE-22706
> URL: https://issues.apache.org/jira/browse/HIVE-22706
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> in case the column value is false ; the storage handler interprets it as true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Description: 
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
// Simple implementation for now - currently Parquet uses heap buffers.
int result = -1;
if (bb.hasArray()) {
  result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
  if (result > 0) {
bb.position(bb.position() + result);
  }
} else {
  byte[] b = new byte[bb.remaining()];
  result = readInternal(b, 0, result); // The readInternal is called with result=-1
  bb.put(b, 0, result);
}
return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
if (position >= length) return -1;
int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
  if (bufferIx == cacheData.length) return (argPos - offset);
  ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
  int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
  data.position(data.position() + bufferPos);
  data.get(b, argPos, toConsume);
  if (data.remaining() == 0) {
++bufferIx;
bufferPos = 0;
  } else {
bufferPos += toConsume;
  }
  argPos += toConsume;
}
return len;
  }
{noformat}
The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 Parquet 
version, there were some optimizations (PARQUET-1542), so this method is called 
now. This bug causes the TestMiniLlapCliDriver and TestMiniLlapLocalCliDriver q 
tests to fail with the new Parquet version.

  was:
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
// Simple implementation for now - currently Parquet uses heap buffers.
int result = -1;
if (bb.hasArray()) {
  result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
  if (result > 0) {
bb.position(bb.position() + result);
  }
} else {
  byte[] b = new byte[bb.remaining()];
  result = readInternal(b, 0, result); // The readInternal is called with result=-1
  bb.put(b, 0, result);
}
return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
if (position >= length) return -1;
int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
  if (bufferIx == cacheData.length) return (argPos - offset);
  ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
  int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
  data.position(data.position() + bufferPos);
  data.get(b, argPos, toConsume);
  if (data.remaining() == 0) {
++bufferIx;
bufferPos = 0;
  } else {
bufferPos += toConsume;
  }
  argPos += toConsume;
}
return len;
  }
{noformat}
The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 Parquet 
version, there were some optimizations 
([PARQUET-1542|https://issues.apache.org/jira/browse/PARQUET-1542]), so this 
method is called now. This bug causes the TestMiniLlapCliDriver and 
TestMiniLlapLocalCliDriver q tests to fail with the new Parquet version.


> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> 

[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Description: 
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
// Simple implementation for now - currently Parquet uses heap buffers.
int result = -1;
if (bb.hasArray()) {
  result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
  if (result > 0) {
bb.position(bb.position() + result);
  }
} else {
  byte[] b = new byte[bb.remaining()];
  result = readInternal(b, 0, result); // The readInternal is called with result=-1
  bb.put(b, 0, result);
}
return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
if (position >= length) return -1;
int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
  if (bufferIx == cacheData.length) return (argPos - offset);
  ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
  int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
  data.position(data.position() + bufferPos);
  data.get(b, argPos, toConsume);
  if (data.remaining() == 0) {
++bufferIx;
bufferPos = 0;
  } else {
bufferPos += toConsume;
  }
  argPos += toConsume;
}
return len;
  }
{noformat}
The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 Parquet 
version, there were some optimizations (PARQUET-1542), so this method is called 
now. Because of this bug, the TestMiniLlapCliDriver and 
TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.

  was:
The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
method with the result parameter passed as 'len'. The value of the result 
parameter will always be -1 at this point, and because of this, the 
readInternal method won't read anything.
{noformat}
  public int read(ByteBuffer bb) throws IOException {
// Simple implementation for now - currently Parquet uses heap buffers.
int result = -1;
if (bb.hasArray()) {
  result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
  if (result > 0) {
bb.position(bb.position() + result);
  }
} else {
  byte[] b = new byte[bb.remaining()];
  result = readInternal(b, 0, result); // The readInternal is called with result=-1
  bb.put(b, 0, result);
}
return result;
  }
{noformat}
{noformat}
  public int readInternal(byte[] b, int offset, int len) {
if (position >= length) return -1;
int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
  if (bufferIx == cacheData.length) return (argPos - offset);
  ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
  int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
  data.position(data.position() + bufferPos);
  data.get(b, argPos, toConsume);
  if (data.remaining() == 0) {
++bufferIx;
bufferPos = 0;
  } else {
bufferPos += toConsume;
  }
  argPos += toConsume;
}
return len;
  }
{noformat}
The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 Parquet 
version, there were some optimizations (PARQUET-1542), so this method is called 
now. This bug causes the TestMiniLlapCliDriver and TestMiniLlapLocalCliDriver q 
tests to fail with the new Parquet version.


> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {

[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Status: Patch Available  (was: Open)

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Attachment: HIVE-22716.1.patch

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22510) Support decimal64 operations for column operands with different scales

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012666#comment-17012666
 ] 

Hive QA commented on HIVE-22510:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990491/HIVE-22510.19.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17862 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20140/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20140/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20140/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990491 - PreCommit-HIVE-Build

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22510.11.patch, HIVE-22510.13.patch, 
> HIVE-22510.14.patch, HIVE-22510.15.patch, HIVE-22510.16.patch, 
> HIVE-22510.17.patch, HIVE-22510.18.patch, HIVE-22510.19.patch, 
> HIVE-22510.2.patch, HIVE-22510.3.patch, HIVE-22510.4.patch, 
> HIVE-22510.5.patch, HIVE-22510.7.patch, HIVE-22510.9.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, if the operands on the decimal64 operations are columns with 
> different scales, then we do not use the decimal64 vectorized version and 
> fall back to HiveDecimal vectorized version of the operator. In this Jira, we 
> will check if we can use decimal64 vectorized version, even if the scales are 
> different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22706) Jdbc storage handler incorrectly interprets boolean column value in derby

2020-01-10 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22706:

Status: Patch Available  (was: Open)

> Jdbc storage handler incorrectly interprets boolean column value in derby
> -
>
> Key: HIVE-22706
> URL: https://issues.apache.org/jira/browse/HIVE-22706
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22706.01.patch
>
>
> in case the column value is false ; the storage handler interprets it as true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22706) Jdbc storage handler incorrectly interprets boolean column value in derby

2020-01-10 Thread Zoltan Haindrich (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-22706:

Attachment: HIVE-22706.01.patch

> Jdbc storage handler incorrectly interprets boolean column value in derby
> -
>
> Key: HIVE-22706
> URL: https://issues.apache.org/jira/browse/HIVE-22706
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22706.01.patch
>
>
> in case the column value is false ; the storage handler interprets it as true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22663) Quote all table and column names or do not quote any

2020-01-10 Thread Zoltan Chovan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012632#comment-17012632
 ] 

Zoltan Chovan commented on HIVE-22663:
--

[~pvary] could you review?

> Quote all table and column names or do not quote any
> 
>
> Key: HIVE-22663
> URL: https://issues.apache.org/jira/browse/HIVE-22663
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, Standalone Metastore
>Affects Versions: 4.0.0
>Reporter: Ashutosh Bapat
>Assignee: Zoltan Chovan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22663.2.patch, HIVE-22663.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The change in HIVE-22546 is causing the following stack trace when I run Hive 
> with PostgreSQL as the backend db for the metastore.
> 0: jdbc:hive2://localhost:1> create database dumpdb with ('repl.source.for'='1,2,3');
> Error: Error while compiling statement: FAILED: ParseException line 1:28 missing KW_DBPROPERTIES at '(' near '' (state=42000,code=4)
> 0: jdbc:hive2://localhost:1> create database dumpdb with dbproperties ('repl.source.for'='1,2,3');
> ERROR : FAILED: Hive Internal Error: org.apache.hadoop.hive.ql.lockmgr.LockException(Error communicating with the metastore)
> org.apache.hadoop.hive.ql.lockmgr.LockException: Error communicating with the metastore
>  at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.commitTxn(DbTxnManager.java:541)
>  at org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:687)
>  at org.apache.hadoop.hive.ql.Driver.releaseLocksAndCommitOrRollback(Driver.java:653)
>  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:969)
> ... stack trace clipped
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
> Caused by: MetaException(message:Unable to update transaction database 
> org.postgresql.util.PSQLException: ERROR: relation "materialization_rebuild_locks" does not exist  Position: 13
>  at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2440)
>  at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2183)
>  at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:308)
>  at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
>  at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365) at 
> This happens because the table names in all the queries in TxnHandler.java 
> (including the one at 1312, which causes this stack trace) are not quoted. 
> All the table names and column names should be quoted there. 
> Just the change in HIVE-22546 won't suffice.
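
A small illustration of the underlying PostgreSQL behaviour (the 
quoteIdentifier helper below is hypothetical and not the TxnHandler API): 
unquoted identifiers are folded to lower case, so a table created with a 
quoted upper-case name can only be found by an equally quoted reference, which 
is why mixing quoted and unquoted names breaks.
{noformat}
public class QuotedIdentifierExample {
  // Hypothetical helper for illustration; not taken from the Hive codebase.
  static String quoteIdentifier(String name) {
    // Double any embedded quotes and wrap the identifier, preserving its exact case.
    return "\"" + name.replace("\"", "\"\"") + "\"";
  }

  public static void main(String[] args) {
    // Unquoted: PostgreSQL folds the name to materialization_rebuild_locks.
    String unquoted = "SELECT * FROM MATERIALIZATION_REBUILD_LOCKS";
    // Quoted: looked up with the exact case the table was created with.
    String quoted = "SELECT * FROM " + quoteIdentifier("MATERIALIZATION_REBUILD_LOCKS");
    System.out.println(unquoted);
    System.out.println(quoted);
  }
}
{noformat}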



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22706) Jdbc storage handler incorrectly interprets boolean column value in derby

2020-01-10 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012727#comment-17012727
 ] 

Zoltan Haindrich commented on HIVE-22706:
-

well; after some digging my hypothesis has been confirmed: datanucleus is 
pulling these Y/N values into play... the translation is done at several 
places, including 
[this|https://github.com/datanucleus/datanucleus-rdbms/blob/6f01a33e7d514f90775ddd7d3d7bfe77303e787b/src/main/java/org/datanucleus/store/rdbms/mapping/datastore/CharRDBMSMapping.java#L332].

I played with the idea of reusing the parts which do this kind of conversion, 
but they are deeply tied to a plugin system inside datanucleus and it would 
look strange.

So I think the best course of action, to have our sysdb show rational values, 
is to interpret the char 'N' as false...
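
As a rough sketch of that interpretation (illustrative only, not the eventual 
HIVE-22706 patch; the helper name is made up), the handler would map the 
single-character 'Y'/'N' representation that datanucleus writes back to a Java 
boolean, treating 'N' as false instead of letting it come through as true:
{noformat}
// Hypothetical helper, not the actual patch: interpret the datanucleus-style
// CHAR(1) boolean representation ("Y"/"N") stored in the Derby metastore.
static Boolean charToBoolean(String value) {
  if (value == null) {
    return null;
  }
  String v = value.trim();
  if (v.equalsIgnoreCase("Y") || v.equalsIgnoreCase("true")) {
    return Boolean.TRUE;
  }
  if (v.equalsIgnoreCase("N") || v.equalsIgnoreCase("false")) {
    return Boolean.FALSE;
  }
  return null; // unknown representation, leave undecided
}
{noformat}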

> Jdbc storage handler incorrectly interprets boolean column value in derby
> -
>
> Key: HIVE-22706
> URL: https://issues.apache.org/jira/browse/HIVE-22706
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> in case the column value is false ; the storage handler interprets it as true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22713) Constant propagation shouldn't be done for Join-Fil(*)-RS structure

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012751#comment-17012751
 ] 

Hive QA commented on HIVE-22713:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990492/HIVE-22713.2.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17861 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20141/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20141/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20141/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990492 - PreCommit-HIVE-Build

> Constant propagation shouldn't be done for Join-Fil(*)-RS structure
> ---
>
> Key: HIVE-22713
> URL: https://issues.apache.org/jira/browse/HIVE-22713
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22713.1.patch, HIVE-22713.2.patch
>
>
> Constant propagation shouldn't be done for Join-Fil(*)-RS structure too. 
> Since we output columns from the join if the structure is Join-Fil(*)-RS, the 
> expressions shouldn't be modified.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012843#comment-17012843
 ] 

László Bodor commented on HIVE-20934:
-

green run, [~pvary]'s +1 is on review board, committing this to master

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-20934:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Jira


[ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012855#comment-17012855
 ] 

László Bodor commented on HIVE-20934:
-

pushed to master, thanks for the patch [~lpinter] and for the review [~pvary]!

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

László Bodor updated HIVE-20934:

Fix Version/s: 4.0.0

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012869#comment-17012869
 ] 

Hive QA commented on HIVE-22716:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
11s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20143/dev-support/hive-personality.sh
 |
| git revision | master / f8e583f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20143/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
>   if (bufferIx == 

[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012916#comment-17012916
 ] 

Hive QA commented on HIVE-22716:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990503/HIVE-22716.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17860 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommands.testMergeOnTezEdges (batchId=358)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20143/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20143/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20143/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990503 - PreCommit-HIVE-Build

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22653) Remove commons-lang leftovers

2020-01-10 Thread David Lavati (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Lavati updated HIVE-22653:

Attachment: HIVE-22653.04.patch

> Remove commons-lang leftovers
> -
>
> Key: HIVE-22653
> URL: https://issues.apache.org/jira/browse/HIVE-22653
> Project: Hive
>  Issue Type: Bug
>Reporter: David Lavati
>Assignee: David Lavati
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-22653.01.patch, HIVE-22653.01.patch, 
> HIVE-22653.02.patch, HIVE-22653.03.patch, HIVE-22653.04.patch, 
> HIVE-22653.04.patch, HIVE-22653.04.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> HIVE-7145 removed commons-lang - in favor of commons-lang3 - as a direct 
> dependency, however a high number of files still refer to commons-lang, which 
> is transitively brought in either way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012833#comment-17012833
 ] 

Hive QA commented on HIVE-20934:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990494/HIVE-20934.14.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 17871 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20142/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20142/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20142/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990494 - PreCommit-HIVE-Build

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22706) Jdbc storage handler incorrectly interprets boolean column value in derby

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012997#comment-17012997
 ] 

Hive QA commented on HIVE-22706:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990514/HIVE-22706.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17871 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[cbo_query23] 
(batchId=303)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20144/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20144/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20144/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990514 - PreCommit-HIVE-Build

> Jdbc storage handler incorrectly interprets boolean column value in derby
> -
>
> Key: HIVE-22706
> URL: https://issues.apache.org/jira/browse/HIVE-22706
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22706.01.patch
>
>
> in case the column value is false ; the storage handler interprets it as true



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22706) Jdbc storage handler incorrectly interprets boolean column value in derby

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012943#comment-17012943
 ] 

Hive QA commented on HIVE-22706:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
45s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} jdbc-handler in master has 11 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m  
9s{color} | {color:red} jdbc-handler: The patch generated 5 new + 40 unchanged 
- 1 fixed = 45 total (was 41) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20144/dev-support/hive-personality.sh
 |
| git revision | master / 5aa5d74 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20144/yetus/diff-checkstyle-jdbc-handler.txt
 |
| modules | C: ql jdbc-handler U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20144/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Jdbc storage handler incorrectly interprets boolean column value in derby
> -
>
> Key: HIVE-22706
> URL: https://issues.apache.org/jira/browse/HIVE-22706
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-22706.01.patch
>
>
> If the column value is false, the storage handler interprets it as true.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora updated HIVE-22716:
-
Attachment: HIVE-22716.2.patch

> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>   if (bufferIx == cacheData.length) return (argPos - offset);
>   ByteBuffer data = cacheData[bufferIx].getByteBufferDup();
>   int toConsume = Math.min(argEnd - argPos, data.remaining() - bufferPos);
>   data.position(data.position() + bufferPos);
>   data.get(b, argPos, toConsume);
>   if (data.remaining() == 0) {
> ++bufferIx;
> bufferPos = 0;
>   } else {
> bufferPos += toConsume;
>   }
>   argPos += toConsume;
> }
> return len;
>   }
> {noformat}
> The read(ByteBuffer bb) method wasn't called before, but in the 1.11.0 
> Parquet version, there were some optimizations (PARQUET-1542), so this method 
> is called now. Because of this bug, the TestMiniLlapCliDriver and 
> TestMiniLlapLocalCliDriver q tests are failing with the new Parquet version.
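
For illustration, a minimal sketch of the kind of change that addresses the symptom above: request bb.remaining() bytes instead of passing the uninitialized result variable to readInternal. This is only a sketch against the code quoted in the description and not necessarily what the attached patches do.
{noformat}
  // Sketch only: ask readInternal (quoted above) for bb.remaining() bytes.
  // For array-backed buffers the write offset accounts for the current position.
  public int read(ByteBuffer bb) throws IOException {
    int len = bb.remaining();
    int result;
    if (bb.hasArray()) {
      result = readInternal(bb.array(), bb.arrayOffset() + bb.position(), len);
      if (result > 0) {
        bb.position(bb.position() + result);
      }
    } else {
      byte[] b = new byte[len];
      result = readInternal(b, 0, len);
      if (result > 0) {
        bb.put(b, 0, result);
      }
    }
    return result;
  }
{noformat}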



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22653) Remove commons-lang leftovers

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013001#comment-17013001
 ] 

Hive QA commented on HIVE-22653:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990515/HIVE-22653.04.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20145/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20145/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20145/

Messages:
{noformat}
 This message was trimmed, see log for full details 
error: a/itests/util/src/main/java/org/apache/hadoop/hive/ql/QTestUtil.java: 
does not exist in index
error: 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/hooks/CheckColumnAccessHook.java:
 does not exist in index
error: 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/hooks/CheckTableAccessHook.java:
 does not exist in index
error: 
a/itests/util/src/main/java/org/apache/hadoop/hive/ql/hooks/VerifySessionStateStackTracesHook.java:
 does not exist in index
error: a/itests/util/src/main/java/org/apache/hive/beeline/QFile.java: does not 
exist in index
error: a/jdbc/src/java/org/apache/hive/jdbc/HiveConnection.java: does not exist 
in index
error: a/jdbc/src/java/org/apache/hive/jdbc/HiveStatement.java: does not exist 
in index
error: 
a/llap-common/src/java/org/apache/hadoop/hive/llap/security/LlapTokenIdentifier.java:
 does not exist in index
error: a/llap-common/src/test/org/apache/hadoop/hive/llap/TestRow.java: does 
not exist in index
error: 
a/llap-server/src/java/org/apache/hadoop/hive/llap/cache/LowLevelLrfuCachePolicy.java:
 does not exist in index
error: 
a/llap-server/src/java/org/apache/hadoop/hive/llap/daemon/impl/LlapTaskReporter.java:
 does not exist in index
error: 
a/llap-tez/src/java/org/apache/hadoop/hive/llap/tezplugins/LlapTaskSchedulerService.java:
 does not exist in index
error: 
a/metastore/src/java/org/apache/hadoop/hive/metastore/HiveClientCache.java: 
does not exist in index
error: 
a/metastore/src/java/org/apache/hadoop/hive/metastore/SerDeStorageSchemaReader.java:
 does not exist in index
error: a/pom.xml: does not exist in index
error: a/ql/pom.xml: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/Context.java: does not exist in 
index
error: a/ql/src/java/org/apache/hadoop/hive/ql/ddl/DDLUtils.java: does not 
exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/database/alter/location/AlterDatabaseSetLocationOperation.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/function/desc/DescFunctionOperation.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/AbstractAlterTableOperation.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/create/show/ShowCreateTableOperation.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/info/DescTableOperation.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/ddl/table/misc/AlterTableSetPropertiesOperation.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/debug/Utils.java: does not exist 
in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/FetchOperator.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/MapJoinOperator.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/OrcFileMergeOperator.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecDriver.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapRedTask.java: does 
not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/persistence/BytesBytesMultiHashMap.java:
 does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadTask.java: 
does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/ReplLoadWork.java: 
does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/BootstrapEventsIterator.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/DatabaseEventsIterator.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/FSDatabaseEvent.java:
 does not exist in index
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/exec/repl/bootstrap/events/filesystem/FSTableEvent.java:
 does not exist in index
error: 

[jira] [Updated] (HIVE-22595) Dynamic partition inserts fail on Avro table with external schema

2020-01-10 Thread Jason Dere (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-22595:
--
Fix Version/s: 4.0.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to master

> Dynamic partition inserts fail on Avro table with external schema
> ---
>
> Key: HIVE-22595
> URL: https://issues.apache.org/jira/browse/HIVE-22595
> Project: Hive
>  Issue Type: Bug
>  Components: Avro, Serializers/Deserializers
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22595.1.patch, HIVE-22595.2.patch, 
> HIVE-22595.3.patch
>
>
> Example qfile test:
> {noformat}
> create external table avro_extschema_insert1 (name string) partitioned by (p1 
> string)
>   stored as avro tblproperties 
> ('avro.schema.url'='${system:test.tmp.dir}/table1.avsc');
> create external table avro_extschema_insert2 like avro_extschema_insert1;
> insert overwrite table avro_extschema_insert1 partition (p1='part1') values 
> ('col1_value', 1, 'col3_value');
> insert overwrite table avro_extschema_insert2 partition (p1) select * from 
> avro_extschema_insert1;
> {noformat}
> The last statement fails with the following error:
> {noformat}
> ], TaskAttempt 3 failed, info=[Error: Error while running task ( failure ) : 
> attempt_1575484789169_0003_4_00_00_3:java.lang.RuntimeException: 
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing row
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
>   at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
>   at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
>   at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
>   at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
>   at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:101)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:76)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.run(MapRecordProcessor.java:426)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:267)
>   ... 16 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:576)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.processRow(MapRecordSource.java:92)
>   ... 19 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Number of input 
> columns was different than output columns (in = 2 vs out = 1
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:1047)
>   at 
> org.apache.hadoop.hive.ql.exec.Operator.baseForward(Operator.java:994)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:940)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:927)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)
>   at 
> 

[jira] [Commented] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013102#comment-17013102
 ] 

Hive QA commented on HIVE-22716:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
2s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20146/dev-support/hive-personality.sh
 |
| git revision | master / dfcb0a4 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20146/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-22716.1.patch, HIVE-22716.2.patch
>
>
> The ParquetFooterInputFromCache.read(ByteBuffer bb) calls the readInternal 
> method with the result parameter passed as 'len'. The value of the result 
> parameter will always be -1 at this point, and because of this, the 
> readInternal method won't read anything.
> {noformat}
>   public int read(ByteBuffer bb) throws IOException {
> // Simple implementation for now - currently Parquet uses heap buffers.
> int result = -1;
> if (bb.hasArray()) {
>   result = readInternal(bb.array(), bb.arrayOffset(), result);  // The 
> readInternal is called with result=-1
>   if (result > 0) {
> bb.position(bb.position() + result);
>   }
> } else {
>   byte[] b = new byte[bb.remaining()];
>   result = readInternal(b, 0, result); // The readInternal is called 
> with result=-1
>   bb.put(b, 0, result);
> }
> return result;
>   }
> {noformat}
> {noformat}
>   public int readInternal(byte[] b, int offset, int len) {
> if (position >= length) return -1;
> int argPos = offset, argEnd = offset + len;  // Here argEnd will be -1
> while (argPos < argEnd) { // This condition will never be 
> true, since argEnd=-1
>

[jira] [Commented] (HIVE-22713) Constant propagation shouldn't be done for Join-Fil(*)-RS structure

2020-01-10 Thread Ramesh Kumar Thangarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013106#comment-17013106
 ] 

Ramesh Kumar Thangarajan commented on HIVE-22713:
-

Hi [~jcamachorodriguez], can you please help me by reviewing the patch?

> Constant propagation shouldn't be done for Join-Fil(*)-RS structure
> ---
>
> Key: HIVE-22713
> URL: https://issues.apache.org/jira/browse/HIVE-22713
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22713.1.patch, HIVE-22713.2.patch
>
>
> Constant propagation shouldn't be done for the Join-Fil(*)-RS structure 
> either. Since we output columns from the join when the structure is 
> Join-Fil(*)-RS, the expressions shouldn't be modified.
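
To make the shape concrete, a hypothetical sketch (not Hive's Operator or ConstantPropagateProcFactory classes) of detecting the Join-Fil(*)-RS pattern that, per the description, should be excluded from constant propagation:
{noformat}
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical operator-tree model; not Hive's actual Operator classes.
final class OpNode {
  enum Kind { JOIN, FILTER, REDUCE_SINK, OTHER }
  final Kind kind;
  final List<OpNode> parents; // operators feeding into this one
  OpNode(Kind kind, List<OpNode> parents) {
    this.kind = kind;
    this.parents = parents;
  }
}

public class JoinFilterRsSketch {
  // True when op is a ReduceSink whose ancestry, through zero or more Filters,
  // reaches a Join (the Join-Fil(*)-RS shape). A rule using this check would
  // skip constant folding for such operators.
  static boolean isJoinFilterReduceSink(OpNode op) {
    if (op.kind != OpNode.Kind.REDUCE_SINK) {
      return false;
    }
    OpNode current = op;
    while (current.parents.size() == 1) {
      OpNode parent = current.parents.get(0);
      if (parent.kind == OpNode.Kind.JOIN) {
        return true;
      }
      if (parent.kind != OpNode.Kind.FILTER) {
        return false;
      }
      current = parent;
    }
    return false;
  }

  public static void main(String[] args) {
    OpNode join = new OpNode(OpNode.Kind.JOIN, Collections.emptyList());
    OpNode filter = new OpNode(OpNode.Kind.FILTER, Arrays.asList(join));
    OpNode rs = new OpNode(OpNode.Kind.REDUCE_SINK, Arrays.asList(filter));
    System.out.println(isJoinFilterReduceSink(rs)); // prints true
  }
}
{noformat}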



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22510) Support decimal64 operations for column operands with different scales

2020-01-10 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22510:

Attachment: HIVE-22510.19.patch
Status: Patch Available  (was: Open)

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22510.11.patch, HIVE-22510.13.patch, 
> HIVE-22510.14.patch, HIVE-22510.15.patch, HIVE-22510.16.patch, 
> HIVE-22510.17.patch, HIVE-22510.18.patch, HIVE-22510.19.patch, 
> HIVE-22510.2.patch, HIVE-22510.3.patch, HIVE-22510.4.patch, 
> HIVE-22510.5.patch, HIVE-22510.7.patch, HIVE-22510.9.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, if the operands of a decimal64 operation are columns with 
> different scales, we do not use the decimal64 vectorized version and fall 
> back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether the decimal64 vectorized version can be used even if the 
> scales are different.
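
As background, decimal64 vectorization keeps each value as a scaled long, so combining columns with different scales means rescaling one side by a power of ten and watching for overflow. A minimal standalone sketch of that idea, with hypothetical names rather than Hive's vectorized expression classes:
{noformat}
// Illustrative only: align two decimal64 (scaled-long) operands to a common
// scale before adding them.
public final class Decimal64ScaleSketch {

  private static final long[] POWERS_OF_TEN = {
      1L, 10L, 100L, 1_000L, 10_000L, 100_000L, 1_000_000L, 10_000_000L,
      100_000_000L, 1_000_000_000L, 10_000_000_000L, 100_000_000_000L,
      1_000_000_000_000L, 10_000_000_000_000L, 100_000_000_000_000L,
      1_000_000_000_000_000L, 10_000_000_000_000_000L,
      100_000_000_000_000_000L, 1_000_000_000_000_000_000L};

  // Adds a (at scaleA) and b (at scaleB); the result is at max(scaleA, scaleB).
  // Throws ArithmeticException on overflow.
  static long addDecimal64(long a, int scaleA, long b, int scaleB) {
    int targetScale = Math.max(scaleA, scaleB);
    long scaledA = Math.multiplyExact(a, POWERS_OF_TEN[targetScale - scaleA]);
    long scaledB = Math.multiplyExact(b, POWERS_OF_TEN[targetScale - scaleB]);
    return Math.addExact(scaledA, scaledB);
  }

  public static void main(String[] args) {
    // 12.34 (scale 2) + 1.234 (scale 3) = 13.574 (scale 3)
    System.out.println(addDecimal64(1234L, 2, 1234L, 3)); // prints 13574
  }
}
{noformat}
Overflow during rescaling is exactly the case where such an approach would still need the HiveDecimal fallback mentioned above.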



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22261) Add tests for materialized view rewriting with window functions

2020-01-10 Thread Zoltan Haindrich (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012544#comment-17012544
 ] 

Zoltan Haindrich commented on HIVE-22261:
-

+1

> Add tests for materialized view rewriting with window functions
> ---
>
> Key: HIVE-22261
> URL: https://issues.apache.org/jira/browse/HIVE-22261
> Project: Hive
>  Issue Type: Sub-task
>  Components: CBO, Materialized views, Tests
>Affects Versions: 3.1.2
>Reporter: Steve Carlin
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-22261.patch, af2.sql
>
>
> Materialized views don't support window functions. At a minimum, we should 
> print a friendlier message when the rewrite fails (the view can still be 
> created with "disable rewrite").
> The script is attached.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22510) Support decimal64 operations for column operands with different scales

2020-01-10 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22510:

Status: Open  (was: Patch Available)

> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22510.11.patch, HIVE-22510.13.patch, 
> HIVE-22510.14.patch, HIVE-22510.15.patch, HIVE-22510.16.patch, 
> HIVE-22510.17.patch, HIVE-22510.18.patch, HIVE-22510.19.patch, 
> HIVE-22510.2.patch, HIVE-22510.3.patch, HIVE-22510.4.patch, 
> HIVE-22510.5.patch, HIVE-22510.7.patch, HIVE-22510.9.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, if the operands of a decimal64 operation are columns with 
> different scales, we do not use the decimal64 vectorized version and fall 
> back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether the decimal64 vectorized version can be used even if the 
> scales are different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22713) Constant propagation shouldn't be done for Join-Fil(*)-RS structure

2020-01-10 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22713:

Status: Open  (was: Patch Available)

> Constant propagation shouldn't be done for Join-Fil(*)-RS structure
> ---
>
> Key: HIVE-22713
> URL: https://issues.apache.org/jira/browse/HIVE-22713
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22713.1.patch, HIVE-22713.2.patch
>
>
> Constant propagation shouldn't be done for the Join-Fil(*)-RS structure 
> either. Since we output columns from the join when the structure is 
> Join-Fil(*)-RS, the expressions shouldn't be modified.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-22713) Constant propagation shouldn't be done for Join-Fil(*)-RS structure

2020-01-10 Thread Ramesh Kumar Thangarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ramesh Kumar Thangarajan updated HIVE-22713:

Attachment: HIVE-22713.2.patch
Status: Patch Available  (was: Open)

> Constant propagation shouldn't be done for Join-Fil(*)-RS structure
> ---
>
> Key: HIVE-22713
> URL: https://issues.apache.org/jira/browse/HIVE-22713
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
> Attachments: HIVE-22713.1.patch, HIVE-22713.2.patch
>
>
> Constant propagation shouldn't be done for the Join-Fil(*)-RS structure 
> either. Since we output columns from the join when the structure is 
> Join-Fil(*)-RS, the expressions shouldn't be modified.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012573#comment-17012573
 ] 

Hive QA commented on HIVE-20934:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12990483/HIVE-20934.13.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 17871 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[timestamptz_2] 
(batchId=90)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/20139/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/20139/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-20139/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12990483 - PreCommit-HIVE-Build

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch
>
>
> Follow-up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HIVE-20934) ACID: Query based compactor for minor compaction

2020-01-10 Thread Laszlo Pinter (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Pinter updated HIVE-20934:
-
Attachment: HIVE-20934.14.patch

> ACID: Query based compactor for minor compaction
> 
>
> Key: HIVE-20934
> URL: https://issues.apache.org/jira/browse/HIVE-20934
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Affects Versions: 3.1.1
>Reporter: Vaibhav Gumashta
>Assignee: Laszlo Pinter
>Priority: Major
> Attachments: HIVE-20934.01.patch, HIVE-20934.02.patch, 
> HIVE-20934.03.patch, HIVE-20934.04.patch, HIVE-20934.05.patch, 
> HIVE-20934.06.patch, HIVE-20934.07.patch, HIVE-20934.08.patch, 
> HIVE-20934.09.patch, HIVE-20934.10.patch, HIVE-20934.11.patch, 
> HIVE-20934.12.patch, HIVE-20934.13.patch, HIVE-20934.14.patch
>
>
> Follow-up of HIVE-20699. This is to enable running minor compactions as a 
> HiveQL query.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-22510) Support decimal64 operations for column operands with different scales

2020-01-10 Thread Hive QA (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-22510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17012604#comment-17012604
 ] 

Hive QA commented on HIVE-22510:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
58s{color} | {color:blue} ql in master has 1531 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
47s{color} | {color:red} ql: The patch generated 3 new + 780 unchanged - 0 
fixed = 783 total (was 780) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 26m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-20140/dev-support/hive-personality.sh
 |
| git revision | master / f8e583f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.1 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20140/yetus/diff-checkstyle-ql.txt
 |
| modules | C: vector-code-gen ql itests U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-20140/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support decimal64 operations for column operands with different scales
> --
>
> Key: HIVE-22510
> URL: https://issues.apache.org/jira/browse/HIVE-22510
> Project: Hive
>  Issue Type: Bug
>Reporter: Ramesh Kumar Thangarajan
>Assignee: Ramesh Kumar Thangarajan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-22510.11.patch, HIVE-22510.13.patch, 
> HIVE-22510.14.patch, HIVE-22510.15.patch, HIVE-22510.16.patch, 
> HIVE-22510.17.patch, HIVE-22510.18.patch, HIVE-22510.19.patch, 
> HIVE-22510.2.patch, HIVE-22510.3.patch, HIVE-22510.4.patch, 
> HIVE-22510.5.patch, HIVE-22510.7.patch, HIVE-22510.9.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Right now, if the operands of a decimal64 operation are columns with 
> different scales, we do not use the decimal64 vectorized version and fall 
> back to the HiveDecimal vectorized version of the operator. In this Jira, we 
> will check whether the decimal64 vectorized version can be used even if the 
> scales are different.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (HIVE-22716) Reading to ByteBuffer is broken in ParquetFooterInputFromCache

2020-01-10 Thread Marta Kuczora (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-22716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marta Kuczora reassigned HIVE-22716:



> Reading to ByteBuffer is broken in ParquetFooterInputFromCache
> --
>
> Key: HIVE-22716
> URL: https://issues.apache.org/jira/browse/HIVE-22716
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 4.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)