[jira] [Commented] (HIVE-18003) add explicit jdbc connection string args for mappings

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288845#comment-16288845
 ] 

Hive QA commented on HIVE-18003:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901746/HIVE-18153.04.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11130 tests 
executed
*Failed tests:*
{noformat}
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=93)


[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver

2017-12-12 Thread liyunzhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288814#comment-16288814
 ] 

liyunzhang commented on HIVE-18148:
---

[~lirui]: I tried the example you provided.
{code}
set hive.spark.dynamic.partition.pruning=true;
explain select * from src join part1 on src.key=part1.p join part2 on src.value=part2.q;
{code}

But in my env (latest build: 095e6bf8988a03875bc9340b2ab82d5d13c4cb3c), the 
physical plan before SparkCompiler#removeNestedDPP is 
{code}
TS[0]-FIL[22]-SEL[2]-RS[9]-MAPJOIN[32]-MAPJOIN[31]-SEL[15]-FS[16]
TS[3]-FIL[23]-SEL[5]-MAPJOIN[32]
TS[6]-FIL[24]-SEL[8]-RS[13]-MAPJOIN[31]
{code}
there is no DPP operator for removeNestedDPP to traverse because of HIVE-17087: 
SparkMapJoinOptimizer#convertJoinMapJoin removes the DPP operator when there is 
a map join operator.

So did you reproduce the NPE by setting 
{{hive.auto.convert.join.noconditionaltask}} to false? BTW, how does Hive on 
Tez deal with this kind of nested DPP case?
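
A minimal sketch of the two settings involved, using only the generic {{Configuration#set}} API (the harness class is hypothetical; the property names are the ones quoted in this thread):

{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class NestedDppRepro {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    // Keep Spark DPP enabled, but disable unconditional map-join conversion so
    // the DPP operators survive for SparkCompiler#removeNestedDPP to traverse.
    conf.set("hive.spark.dynamic.partition.pruning", "true");
    conf.set("hive.auto.convert.join.noconditionaltask", "false");
    System.out.println(conf.get("hive.auto.convert.join.noconditionaltask"));
  }
}
{code}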


> NPE in SparkDynamicPartitionPruningResolver
> ---
>
> Key: HIVE-18148
> URL: https://issues.apache.org/jira/browse/HIVE-18148
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-18148.1.patch
>
>
> The stack trace is:
> {noformat}
> 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] 
> ql.Driver: FAILED: NullPointerException null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180)
> at 
> org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125)
> at 
> org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74)
> at 
> org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568)
> {noformat}
> At this stage, there shouldn't be a DPP sink whose target map work is null. 
> The root cause seems to be a malformed operator tree generated by 
> SplitOpTreeForDPP.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18003) add explicit jdbc connection string args for mappings

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288802#comment-16288802
 ] 

Hive QA commented on HIVE-18003:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
40s{color} | {color:red} ql: The patch generated 37 new + 480 unchanged - 35 
fixed = 517 total (was 515) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 095e6bf |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8209/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8209/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> add explicit jdbc connection string args for mappings
> -
>
> Key: HIVE-18003
> URL: https://issues.apache.org/jira/browse/HIVE-18003
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, 
> HIVE-18003.03.patch, HIVE-18003.patch, HIVE-18153.04.patch
>
>
> 1) Force using unmanaged/containers execution.
> 2) Optional - specify pool name (config setting to gate this, disabled by 
> default?).
> In phase 2 (or 4?) we might allow #2 to be used by a user to choose between 
> multiple mappings if they have multiple pools they could be mapped to (i.e. 
> to change the ordering essentially). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly

2017-12-12 Thread Naresh P R (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288795#comment-16288795
 ] 

Naresh P R commented on HIVE-18112:
---

Thanks for the update, [~owen.omalley]. I have attached a new patch with the 
suggested changes.

> show create for view having special char in where clause is not showing 
> properly
> 
>
> Key: HIVE-18112
> URL: https://issues.apache.org/jira/browse/HIVE-18112
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-18112-branch-2.2.patch, 
> HIVE-18112.1-branch-2.2.patch, HIVE-18112.2-branch-2.2.patch
>
>
> e.g., 
> CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where 
> `evil_byte1`.`a` = 'abcÖdefÖgh';
> Output:
> ==
> 0: jdbc:hive2://172.26.122.227:1> show create table v2;
> ++--+
> | createtab_stmt  
>|
> ++--+
> | CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` 
> where `evil_byte1`.`a` = 'abc�def�gh'  |
> ++--+
> Only the show create output contains invalid characters; the actual source 
> table content is displayed properly in the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18112) show create for view having special char in where clause is not showing properly

2017-12-12 Thread Naresh P R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naresh P R updated HIVE-18112:
--
Attachment: HIVE-18112.2-branch-2.2.patch

> show create for view having special char in where clause is not showing 
> properly
> 
>
> Key: HIVE-18112
> URL: https://issues.apache.org/jira/browse/HIVE-18112
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-18112-branch-2.2.patch, 
> HIVE-18112.1-branch-2.2.patch, HIVE-18112.2-branch-2.2.patch
>
>
> e.g., 
> CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where 
> `evil_byte1`.`a` = 'abcÖdefÖgh';
> Output:
> ==
> 0: jdbc:hive2://172.26.122.227:1> show create table v2;
> ++--+
> | createtab_stmt  
>|
> ++--+
> | CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` 
> where `evil_byte1`.`a` = 'abc�def�gh'  |
> ++--+
> Only the show create output contains invalid characters; the actual source 
> table content is displayed properly in the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18078) WM getSession needs some retry logic

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288785#comment-16288785
 ] 

Hive QA commented on HIVE-18078:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901747/HIVE-18078.03.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8208/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8208/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8208/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2017-12-13 06:31:12.623
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-8208/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2017-12-13 06:31:12.626
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 095e6bf HIVE-18068: Upgrade to Calcite 1.15 (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 095e6bf HIVE-18068: Upgrade to Calcite 1.15 (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2017-12-13 06:31:17.522
+ rm -rf ../yetus
+ mkdir ../yetus
+ cp -R . ../yetus
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8208/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java:37
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java' with 
conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java:44
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java' 
with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java:17
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java' with 
conflicts.
error: patch failed: 
ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java:272
Falling back to three-way merge...
Applied patch to 
'ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java' 
cleanly.
Going to apply patch with: git apply -p0
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java:37
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java' with 
conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java:44
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java' 
with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java:17
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java' with 
conflicts.
error: patch failed: 
ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java:272
Falling back to three-way merge...
Applied patch to 
'ql/src/test/org/apache/hadoop/hive/ql/exec/tez/TestWorkloadManager.java' 
cleanly.
U ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionState.java
U ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezTask.java
U ql/src/java/org/apache/hadoop/hive/ql/exec/tez/WorkloadManager.java
+ exit 1
'
{noformat}

This message 

[jira] [Commented] (HIVE-18153) refactor reopen and file management in TezTask

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288780#comment-16288780
 ] 

Hive QA commented on HIVE-18153:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901750/HIVE-18153.05.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11526 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.exec.tez.TestWorkloadManager.testQueueing 
(batchId=285)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8207/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8207/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8207/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901750 - PreCommit-HIVE-Build

> refactor reopen and file management in TezTask
> --
>
> Key: HIVE-18153
> URL: https://issues.apache.org/jira/browse/HIVE-18153
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18153.01.patch, HIVE-18153.02.patch, 
> HIVE-18153.03.patch, HIVE-18153.04.patch, HIVE-18153.05.patch, 
> HIVE-18153.patch
>
>
> TezTask reopen relies on getting the same session object in terms of setup; 
> WM reopen returns a new session from the pool. 
> The former has the advantage of not having to reupload files and stuff... but 
> the object reuse results in a lot of ugly code, and also reopen might be 
> slower on average with the session pool than just getting a session from the 
> pool. Either WM needs to do the object-preserving reopen, or TezTask needs to 
> be refactored. It looks like DAG would have to be rebuilt to do the latter 
> because of some paths tied to a directory of the old session. Let me see if I 
> can get around that; if not we can do the former; and then if the former 
> results in too much ugly code in WM to account for object reuse for different 
> Tez client I'd do the latter anyway since it's a failure path :)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly

2017-12-12 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288759#comment-16288759
 ] 

Owen O'Malley commented on HIVE-18112:
--

This looks fine. In general, I prefer to use StandardCharsets.UTF_8 rather than 
the string "UTF-8", but the patch looks good.
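
A minimal standalone sketch of that preference (not from the patch): the {{Charset}} overload avoids both the charset-name lookup and the checked exception that the string overload forces on callers.

{code:java}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class CharsetExample {
  public static void main(String[] args) throws UnsupportedEncodingException {
    String s = "abcÖdefÖgh";
    byte[] byName = s.getBytes("UTF-8");                    // checked UnsupportedEncodingException
    byte[] byCharset = s.getBytes(StandardCharsets.UTF_8);  // no checked exception, no name lookup
    System.out.println(byName.length == byCharset.length);  // same bytes either way
  }
}
{code}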

> show create for view having special char in where clause is not showing 
> properly
> 
>
> Key: HIVE-18112
> URL: https://issues.apache.org/jira/browse/HIVE-18112
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-18112-branch-2.2.patch, 
> HIVE-18112.1-branch-2.2.patch
>
>
> e.g., 
> CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where 
> `evil_byte1`.`a` = 'abcÖdefÖgh';
> Output:
> ==
> 0: jdbc:hive2://172.26.122.227:1> show create table v2;
> ++--+
> | createtab_stmt  
>|
> ++--+
> | CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` 
> where `evil_byte1`.`a` = 'abc�def�gh'  |
> ++--+
> Only the show create output contains invalid characters; the actual source 
> table content is displayed properly in the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18153) refactor reopen and file management in TezTask

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288744#comment-16288744
 ] 

Hive QA commented on HIVE-18153:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 37 new + 480 unchanged - 35 
fixed = 517 total (was 515) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 095e6bf |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8207/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8207/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> refactor reopen and file management in TezTask
> --
>
> Key: HIVE-18153
> URL: https://issues.apache.org/jira/browse/HIVE-18153
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18153.01.patch, HIVE-18153.02.patch, 
> HIVE-18153.03.patch, HIVE-18153.04.patch, HIVE-18153.05.patch, 
> HIVE-18153.patch
>
>
> TezTask reopen relies on getting the same session object in terms of setup; 
> WM reopen returns a new session from the pool. 
> The former has the advantage of not having to reupload files and stuff... but 
> the object reuse results in a lot of ugly code, and also reopen might be 
> slower on average with the session pool than just getting a session from the 
> pool. Either WM needs to do the object-preserving reopen, or TezTask needs to 
> be refactored. It looks like DAG would have to be rebuilt to do the latter 
> because of some paths tied to a directory of the old session. Let me see if I 
> can get around that; if not we can do the former; and then if the former 
> results in too much ugly code in WM to account for object reuse for different 
> Tez client I'd do the latter anyway since it's a failure path :)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18203) change the way WM is enabled and allow dropping the last resource plan

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288730#comment-16288730
 ] 

Hive QA commented on HIVE-18203:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901751/HIVE-18203.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11529 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[explainuser_2] 
(batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=232)
org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=232)
org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=232)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8206/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8206/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8206/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901751 - PreCommit-HIVE-Build

> change the way WM is enabled and allow dropping the last resource plan
> --
>
> Key: HIVE-18203
> URL: https://issues.apache.org/jira/browse/HIVE-18203
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18203.01.patch, HIVE-18203.02.patch, 
> HIVE-18203.03.patch, HIVE-18203.patch
>
>
> Currently it's impossible to drop the last active resource plan even if WM is 
> disabled. It should be possible to deactivate the last resource plan AND 
> disable WM in the same action. Activating a resource plan should enable WM in 
> this case.
> This should interact with the WM queue config in a sensible manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18221) test acid default

2017-12-12 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18221:
--
Attachment: HIVE-18221.10.patch

> test acid default
> -
>
> Key: HIVE-18221
> URL: https://issues.apache.org/jira/browse/HIVE-18221
> Project: Hive
>  Issue Type: Test
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18221.01.patch, HIVE-18221.02.patch, 
> HIVE-18221.03.patch, HIVE-18221.04.patch, HIVE-18221.07.patch, 
> HIVE-18221.08.patch, HIVE-18221.09.patch, HIVE-18221.10.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18125) Support arbitrary file names in input to Load Data

2017-12-12 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18125:
--
Status: Patch Available  (was: Open)

> Support arbitrary file names in input to Load Data
> --
>
> Key: HIVE-18125
> URL: https://issues.apache.org/jira/browse/HIVE-18125
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18125.01.patch
>
>
> HIVE-17361 only allows 000000_0 and 000000_0_copy_1.  Should it support 
> arbitrary names?
> If so, should it sort them and rename 0000_0, 0001_0, etc?
> This is probably a lot easier than changing the whole code base to assign 
> proper 'bucket' (writerId) everywhere Acid reads such file.
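
A hedged sketch of the sort-and-rename idea above (the input names and the helper class are hypothetical; {{%04d_0}} mirrors the 0000_0/0001_0 pattern):

{code:java}
import java.util.Arrays;

public class RenameSketch {
  public static void main(String[] args) {
    // Hypothetical file names from a LOAD DATA source directory.
    String[] inputs = {"a.orc", "data.txt", "part-r-00001"};
    Arrays.sort(inputs);
    for (int i = 0; i < inputs.length; i++) {
      // Assign sequential bucket-style names: 0000_0, 0001_0, ...
      System.out.println(inputs[i] + " -> " + String.format("%04d_0", i));
    }
  }
}
{code}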



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18125) Support arbitrary file names in input to Load Data

2017-12-12 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18125:
--
Attachment: HIVE-18125.01.patch

> Support arbitrary file names in input to Load Data
> --
>
> Key: HIVE-18125
> URL: https://issues.apache.org/jira/browse/HIVE-18125
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18125.01.patch
>
>
> HIVE-17361 only allows 000000_0 and 000000_0_copy_1.  Should it support 
> arbitrary names?
> If so, should it sort them and rename 0000_0, 0001_0, etc?
> This is probably a lot easier than changing the whole code base to assign 
> proper 'bucket' (writerId) everywhere Acid reads such file.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18203) change the way WM is enabled and allow dropping the last resource plan

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288699#comment-16288699
 ] 

Hive QA commented on HIVE-18203:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
31s{color} | {color:red} standalone-metastore: The patch generated 6 new + 2375 
unchanged - 3 fixed = 2381 total (was 2378) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
41s{color} | {color:red} ql: The patch generated 4 new + 1074 unchanged - 2 
fixed = 1078 total (was 1076) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} service: The patch generated 1 new + 31 unchanged - 2 
fixed = 32 total (was 33) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git 
apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
19s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 095e6bf |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8206/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8206/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8206/yetus/diff-checkstyle-service.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8206/yetus/whitespace-eol.txt 
|
| modules | C: standalone-metastore ql service itests/hcatalog-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8206/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> change the way WM is enabled and allow dropping the last resource plan
> --
>
> Key: HIVE-18203
> URL: https://issues.apache.org/jira/browse/HIVE-18203
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18203.01.patch, HIVE-18203.02.patch, 
> HIVE-18203.03.patch, HIVE-18203.patch
>
>
> Currently it's impossible to drop the last active resource plan even if WM is 
> disabled. 

[jira] [Commented] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288681#comment-16288681
 ] 

Hive QA commented on HIVE-17495:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901740/HIVE-17495.6.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 11528 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_partlvl] 
(batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_partial]
 (batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tunable_ndv] (batchId=45)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_partitioned]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[multiMapJoin2]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_count_distinct]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_partition_diff_num_cols]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_partitioned_date_time]
 (batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorization_part_project]
 (batchId=156)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[vector_join_part_col_char]
 (batchId=102)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testPartition 
(batchId=214)
org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore.testPartition 
(batchId=216)
org.apache.hadoop.hive.metastore.TestSetUGIOnBothClientServer.testPartition 
(batchId=212)
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyClient.testPartition 
(batchId=211)
org.apache.hadoop.hive.metastore.TestSetUGIOnOnlyServer.testPartition 
(batchId=221)
org.apache.hadoop.hive.metastore.cache.TestCachedStore.testDatabaseOps 
(batchId=202)
org.apache.hadoop.hive.metastore.cache.TestCachedStore.testPartitionOps 
(batchId=202)
org.apache.hadoop.hive.metastore.cache.TestCachedStore.testTableOps 
(batchId=202)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8205/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8205/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8205/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 33 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901740 - PreCommit-HIVE-Build

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: 

[jira] [Updated] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character

2017-12-12 Thread Hui Huang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Huang updated HIVE-18265:
-
Status: Patch Available  (was: Open)

Hi, all~ 
I've tried the following approaches:
1. Modify HiveLexer.g and HiveParser.g, but that failed.
2. Replace the tab character with a space character.
3. Check the comment during semantic analysis and throw a semantic exception.

In the end, I took the third one. 
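
A minimal sketch of the third approach (the helper below is hypothetical, not the attached patch): validate the comment at semantic-analysis time and throw a SemanticException if it contains a tab.

{code:java}
import org.apache.hadoop.hive.ql.parse.SemanticException;

public class CommentCheck {
  static void validateComment(String comment) throws SemanticException {
    if (comment != null && comment.contains("\t")) {
      throw new SemanticException("Comment must not contain a tab character: " + comment);
    }
  }

  public static void main(String[] args) throws SemanticException {
    validateComment("full_name1");   // passes
    validateComment("full_\tname1"); // throws SemanticException
  }
}
{code}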

> desc formatted/extended or show create table can not fully display the result 
> when field or table comment contains tab character
> 
>
> Key: HIVE-18265
> URL: https://issues.apache.org/jira/browse/HIVE-18265
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 3.0.0
>Reporter: Hui Huang
>Assignee: Hui Huang
> Fix For: 3.0.0
>
> Attachments: HIVE-18265.patch
>
>
> Here are some examples:
> create table test_comment (id1 string comment 'full_\tname1', id2 string 
> comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile;
> When execute `show create table test_comment`, we can see the following 
> content in the console,
> {quote}
> createtab_stmt
> CREATE TABLE `test_comment`(
>   `id1` string COMMENT 'full_
>   `id2` string COMMENT 'full_
>   `id3` string COMMENT 'full_
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1513095570')
> {quote}
> And the output of `desc formatted table ` is a little similar,
> {quote}
> col_name  data_type   comment
> \# col_name   data_type   comment
> id1   string  full_
> id2   string  full_
> id3   string  full_
> \# Detailed Table Information
> (ignore)...
> {quote}
> When execute `desc extended test_comment`, the problem is more obvious,
> {quote}
> col_name  data_type   comment
> id1   string  full_
> id2   string  full_
> id3   string  full_
> Detailed Table InformationTable(tableName:test_comment, 
> dbName:huanghuitest, owner:huanghui, createTime:1513095570, lastAccessTime:0, 
> retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id1, type:string, 
> comment:full_name1), FieldSchema(name:id2, type:string, comment:full_
> {quote}
> *the rest of the content is lost*.
> The content is not really lost; it just cannot be displayed properly, because 
> Hive stores the result in a LazyStruct, and LazyStruct uses '\t' as the field 
> separator:
> {code:java}
> // LazyStruct.java#parse()
> // Go through all bytes in the byte[]
> while (fieldByteEnd <= structByteEnd) {
>   if (fieldByteEnd == structByteEnd || bytes[fieldByteEnd] == separator) {
>     // Reached the end of a field?
>     if (lastColumnTakesRest && fieldId == fields.length - 1) {
>       fieldByteEnd = structByteEnd;
>     }
>     startPosition[fieldId] = fieldByteBegin;
>     fieldId++;
>     if (fieldId == fields.length || fieldByteEnd == structByteEnd) {
>       // All fields have been parsed, or bytes have been parsed.
>       // We need to set the startPosition of fields.length to ensure we
>       // can use the same formula to calculate the length of each field.
>       // For missing fields, their starting positions will all be the same,
>       // which will make their lengths to be -1 and uncheckedGetField will
>       // return these fields as NULLs.
>       for (int i = fieldId; i <= fields.length; i++) {
>         startPosition[i] = fieldByteEnd + 1;
>       }
>       break;
>     }
>     fieldByteBegin = fieldByteEnd + 1;
>     fieldByteEnd++;
>   } else {
>     fieldByteEnd++; // not at a separator yet (escape handling elided here)
>   }
> }
> {code}
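
To make the mechanism concrete, a standalone sketch (plain Java, not Hive code): once a comment containing a tab is serialized into a tab-separated row, re-parsing on '\t' cuts the comment at the embedded tab.

{code:java}
public class TabTruncationDemo {
  public static void main(String[] args) {
    String comment = "full_\tname1"; // column comment containing a tab
    // Row serialized as col_name \t data_type \t comment:
    String row = "id1" + '\t' + "string" + '\t' + comment;
    String[] fields = row.split("\t");
    // Expected 3 fields, but the embedded tab yields 4; the comment field stops at "full_".
    System.out.println(fields.length + " fields; comment field = '" + fields[2] + "'");
  }
}
{code}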



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18111) Fix temp path for Spark DPP sink

2017-12-12 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288654#comment-16288654
 ] 

Rui Li commented on HIVE-18111:
---

Hi [~stakiar], the test failures are not related.
To clarify, in the latest patch each DPP sink outputs to 
{{QUERY_TMP_PATH/dpp_output/uniqueId}}, and the unique ID is used as the event 
source key in the event source maps of each MapWork. For example, suppose DPP1's 
targets are MapWork1 and MapWork2, and DPP2's targets are MapWork2 and MapWork3. 
DPP1 outputs to {{QUERY_TMP_PATH/dpp_output/DPP1_uniqueId}} and DPP2 outputs to 
{{QUERY_TMP_PATH/dpp_output/DPP2_uniqueId}}. MapWork1 has DPP1_uniqueId in its 
event source map, MapWork2 has both DPP1_uniqueId and DPP2_uniqueId, and 
MapWork3 has DPP2_uniqueId. Therefore all three MapWorks can find their outputs 
under {{QUERY_TMP_PATH/dpp_output}}. Does this make sense?
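
A hedged sketch of the layout just described (the IDs and the query tmp dir are hypothetical):

{code:java}
import org.apache.hadoop.fs.Path;

public class DppLayoutSketch {
  public static void main(String[] args) {
    Path queryTmpPath = new Path("/tmp/hive/query-123"); // hypothetical per-query tmp dir
    Path dppRoot = new Path(queryTmpPath, "dpp_output");
    Path dpp1Out = new Path(dppRoot, "DPP1_uniqueId");   // read by MapWork1 and MapWork2
    Path dpp2Out = new Path(dppRoot, "DPP2_uniqueId");   // read by MapWork2 and MapWork3
    System.out.println(dpp1Out + "\n" + dpp2Out);
  }
}
{code}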

For the checkstyle issue, I'm following the indentation of the surrounding code. 
It seems strange to have different indentations in the same file. Maybe it's 
better to fix such issues in separate JIRAs?

> Fix temp path for Spark DPP sink
> 
>
> Key: HIVE-18111
> URL: https://issues.apache.org/jira/browse/HIVE-18111
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-18111.1.patch, HIVE-18111.2.patch, 
> HIVE-18111.3.patch, HIVE-18111.4.patch, HIVE-18111.5.patch, HIVE-18111.5.patch
>
>
> Before HIVE-17877, each DPP sink has only one target work. The output path of 
> a DPP work is {{TMP_PATH/targetWorkId/dppWorkId}}. When we do the pruning, 
> each map work reads DPP outputs under {{TMP_PATH/targetWorkId}}.
> After HIVE-17877, each DPP sink can have multiple target works. It's possible 
> that a map work needs to read DPP outputs from multiple 
> {{TMP_PATH/targetWorkId}}. To solve this, I think we can have a DPP output 
> path specific to each query, e.g. {{QUERY_TMP_PATH/dpp_output}}. Each DPP 
> work outputs to {{QUERY_TMP_PATH/dpp_output/dppWorkId}}. And each map work 
> reads from {{QUERY_TMP_PATH/dpp_output}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288652#comment-16288652
 ] 

Hive QA commented on HIVE-17495:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
23s{color} | {color:red} standalone-metastore: The patch generated 10 new + 
1123 unchanged - 31 fixed = 1133 total (was 1154) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 095e6bf |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8205/yetus/diff-checkstyle-standalone-metastore.txt
 |
| modules | C: standalone-metastore itests/hcatalog-unit itests/hive-unit U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8205/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> 
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-17495.1.patch, HIVE-17495.2.patch, 
> HIVE-17495.3.patch, HIVE-17495.4.patch, HIVE-17495.5.patch, HIVE-17495.6.patch
>
>
> 1. One sql call to retrieve column stats objects for a db
> 2. Cache some aggregate stats for speedup



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-12719) As a hive user, I am facing issues using permanent UDAF's.

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288640#comment-16288640
 ] 

Hive QA commented on HIVE-12719:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901512/HIVE-12719.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11528 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=178)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8204/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8204/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8204/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 17 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901512 - PreCommit-HIVE-Build

> As a hive user, I am facing issues using permanent UDAF's.
> --
>
> Key: HIVE-12719
> URL: https://issues.apache.org/jira/browse/HIVE-12719
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Surbhit
>Assignee: Ganesha Shreedhara
> Attachments: HIVE-12719.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-12719) As a hive user, I am facing issues using permanent UDAF's.

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-12719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288604#comment-16288604
 ] 

Hive QA commented on HIVE-12719:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 095e6bf |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8204/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> As a hive user, I am facing issues using permanent UDAF's.
> --
>
> Key: HIVE-12719
> URL: https://issues.apache.org/jira/browse/HIVE-12719
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.2.1
>Reporter: Surbhit
>Assignee: Ganesha Shreedhara
> Attachments: HIVE-12719.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Attachment: HIVE-17794.03.patch

Added missing Apache license to {{MiniGenericCluster.java}}.

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.02.patch, HIVE-17794.03.patch, 
> HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Commented] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288587#comment-16288587
 ] 

Hive QA commented on HIVE-17794:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901734/HIVE-17794.02.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11529 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=227)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8203/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8203/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8203/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 18 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901734 - PreCommit-HIVE-Build

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.02.patch, HIVE-17794.03.patch, 
> HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> 

[jira] [Updated] (HIVE-18209) Fix API call in VectorizedListColumnReader to get value from BytesColumnVector

2017-12-12 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18209:

Attachment: (was: HIVE-18209.003.patch)

> Fix API call in VectorizedListColumnReader to get value from BytesColumnVector
> --
>
> Key: HIVE-18209
> URL: https://issues.apache.org/jira/browse/HIVE-18209
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HIVE-18209.001.patch, HIVE-18209.002.patch
>
>
> With the API BytesColumnVector.setVal(), the isRepeating attribute can't be 
> set correctly if ListColumnVector.child is a BytesColumnVector. 
> BytesColumnVector.setRef() should be used to avoid this problem.
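
A minimal standalone sketch of the difference between the two calls, assuming
the stock {{org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector}} API (the
reader logic of the actual patch lives in VectorizedListColumnReader; this
example only illustrates the two entry points):
{code}
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.hive.ql.exec.vector.BytesColumnVector;

public class SetValVsSetRef {
  public static void main(String[] args) {
    byte[] data = "abc".getBytes(StandardCharsets.UTF_8);

    BytesColumnVector copied = new BytesColumnVector(1024);
    copied.initBuffer();                        // setVal() copies into this shared buffer
    copied.setVal(0, data, 0, data.length);

    BytesColumnVector referenced = new BytesColumnVector(1024);
    referenced.setRef(0, data, 0, data.length); // just points at the caller's array

    // Both vectors expose "abc" at slot 0 through the same public fields;
    // the difference is whether the vector owns the backing bytes.
    System.out.println(new String(copied.vector[0], copied.start[0],
        copied.length[0], StandardCharsets.UTF_8));
    System.out.println(new String(referenced.vector[0], referenced.start[0],
        referenced.length[0], StandardCharsets.UTF_8));
  }
}
{code}
Per the description above, the list reader should prefer {{setRef()}} so that
each list element lands in its own slot and {{isRepeating}} stays correct.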



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18209) Fix API call in VectorizedListColumnReader to get value from BytesColumnVector

2017-12-12 Thread Colin Ma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Ma updated HIVE-18209:

Attachment: HIVE-18209.003.patch

> Fix API call in VectorizedListColumnReader to get value from BytesColumnVector
> --
>
> Key: HIVE-18209
> URL: https://issues.apache.org/jira/browse/HIVE-18209
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Colin Ma
>Assignee: Colin Ma
> Attachments: HIVE-18209.001.patch, HIVE-18209.002.patch, 
> HIVE-18209.003.patch
>
>
> With the API BytesColumnVector.setVal(), the isRepeating attribute can't be 
> set correctly if ListColumnVector.child is a BytesColumnVector. 
> BytesColumnVector.setRef() should be used to avoid this problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18250) CBO gets turned off with duplicates in RR error

2017-12-12 Thread Ashutosh Chauhan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288558#comment-16288558
 ] 

Ashutosh Chauhan commented on HIVE-18250:
-

+1

> CBO gets turned off with duplicates in RR error
> ---
>
> Key: HIVE-18250
> URL: https://issues.apache.org/jira/browse/HIVE-18250
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Ashutosh Chauhan
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18250.01.patch, HIVE-18250.02.patch
>
>
> {code}
>  create table t1 (a int);
> explain select t1.a as a1, min(t1.a) as a from t1 group by t1.a;
> {code}
> CBO gets turned off with:
> {code}
> WARN [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] parse.RowResolver: Found 
> duplicate column alias in RR: null.a => {null, a1, _col0: int} adding null.a 
> => {null, null, _col1: int}
> 2017-12-07T15:27:47,651 ERROR [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot 
> add column to RR: null.a => _col1: int due to duplication, see previous 
> warnings
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:3985)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4313)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1392)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1322)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {code}
> After that non-CBO path completes the query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288539#comment-16288539
 ] 

Hive QA commented on HIVE-17794:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 47s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  
xml  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 095e6bf |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8203/yetus/patch-asflicense-problems.txt
 |
| modules | C: hcatalog/core hcatalog/hcatalog-pig-adapter 
hcatalog/webhcat/java-client U: hcatalog |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8203/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.02.patch, HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
> 

[jira] [Commented] (HIVE-17710) LockManager should only lock Managed tables

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288524#comment-16288524
 ] 

Hive QA commented on HIVE-17710:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901720/HIVE-17710.04.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11528 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_ppd_decimal] 
(batchId=9)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8202/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8202/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8202/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 15 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901720 - PreCommit-HIVE-Build

> LockManager should only lock Managed tables
> ---
>
> Key: HIVE-17710
> URL: https://issues.apache.org/jira/browse/HIVE-17710
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-17710.01.patch, HIVE-17710.02.patch, 
> HIVE-17710.03.patch, HIVE-17710.04.patch, HIVE-17710.04.patch
>
>
> Should the LM take locks on External tables? Out of the box, the Acid LM is 
> conservative, which can cause throughput issues.
> A better strategy may be to exclude External tables but enable an explicit 
> "lock table/partition" command (only on external tables?).
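
A hedged sketch of the policy being proposed, assuming the stock
{{org.apache.hadoop.hive.ql.metadata.Table}} and metastore {{TableType}} APIs;
the helper name and the explicit-lock flag are illustrative, not the actual
lock-manager change:
{code}
import org.apache.hadoop.hive.metastore.TableType;
import org.apache.hadoop.hive.ql.metadata.Table;

public final class ExternalTableLockPolicy {

  private ExternalTableLockPolicy() {}

  /** Illustrative helper: should the lock manager lock this table at all? */
  static boolean shouldAcquireLock(Table table, boolean explicitLockRequested) {
    if (table.getTableType() == TableType.EXTERNAL_TABLE) {
      // External data can change outside Hive anyway, so automatic locks
      // mostly cost throughput; honor only an explicit LOCK TABLE request.
      return explicitLockRequested;
    }
    return true; // managed tables keep the default (conservative) behavior
  }
}
{code}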



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18201) Disable XPROD_EDGE for sq_count_check() created for scalar subqueries

2017-12-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18201:

Assignee: Ashutosh Chauhan
  Status: Patch Available  (was: Open)

> Disable XPROD_EDGE for sq_count_check()  created for scalar subqueries
> --
>
> Key: HIVE-18201
> URL: https://issues.apache.org/jira/browse/HIVE-18201
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Nita Dembla
>Assignee: Ashutosh Chauhan
> Attachments: HIVE-18201.1.patch, query6.explain2.out
>
>
> sq_count_check() will either return an error at runtime or a single row. In 
> the case of query6, the subquery has an avg() function that should return a 
> single row. Attaching the explain plan.
> This does not need an x-prod, because it is not useful to shuffle the 
> big-table side for a cross product against 1 row.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18201) Disable XPROD_EDGE for sq_count_check() created for scalar subqueries

2017-12-12 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-18201:

Attachment: HIVE-18201.1.patch

Patch that makes Hive decide, via a config, whether to run a cross product 
using a broadcast edge or an XPROD edge.
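
A minimal sketch of the decision this enables, with hypothetical names (the
real patch wires the choice through a HiveConf setting rather than a
constructor argument):
{code}
public class CrossProductEdgeChooser {

  enum EdgeType { BROADCAST, XPROD }

  // Stand-in for the HiveConf flag added by the patch (name is hypothetical).
  private final boolean preferXprodEdge;

  CrossProductEdgeChooser(boolean preferXprodEdge) {
    this.preferXprodEdge = preferXprodEdge;
  }

  EdgeType chooseEdge(boolean smallSideIsSingleRow) {
    // Shuffling the big-table side against a guaranteed single row (the
    // sq_count_check() branch) buys nothing, so broadcast that row instead.
    if (smallSideIsSingleRow) {
      return EdgeType.BROADCAST;
    }
    return preferXprodEdge ? EdgeType.XPROD : EdgeType.BROADCAST;
  }

  public static void main(String[] args) {
    CrossProductEdgeChooser chooser = new CrossProductEdgeChooser(true);
    System.out.println(chooser.chooseEdge(true));  // BROADCAST
    System.out.println(chooser.chooseEdge(false)); // XPROD
  }
}
{code}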

> Disable XPROD_EDGE for sq_count_check()  created for scalar subqueries
> --
>
> Key: HIVE-18201
> URL: https://issues.apache.org/jira/browse/HIVE-18201
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Nita Dembla
> Attachments: HIVE-18201.1.patch, query6.explain2.out
>
>
> sq_count_check() will either return an error at runtime or a single row. In 
> the case of query6, the subquery has an avg() function that should return a 
> single row. Attaching the explain plan.
> This does not need an x-prod, because it is not useful to shuffle the 
> big-table side for a cross product against 1 row.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18250) CBO gets turned off with duplicates in RR error

2017-12-12 Thread Jesus Camacho Rodriguez (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288504#comment-16288504
 ] 

Jesus Camacho Rodriguez commented on HIVE-18250:


[~ashutoshc], could you take a look? Thanks

https://reviews.apache.org/r/64524/

> CBO gets turned off with duplicates in RR error
> ---
>
> Key: HIVE-18250
> URL: https://issues.apache.org/jira/browse/HIVE-18250
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Ashutosh Chauhan
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18250.01.patch, HIVE-18250.02.patch
>
>
> {code}
>  create table t1 (a int);
> explain select t1.a as a1, min(t1.a) as a from t1 group by t1.a;
> {code}
> CBO gets turned off with:
> {code}
> WARN [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] parse.RowResolver: Found 
> duplicate column alias in RR: null.a => {null, a1, _col0: int} adding null.a 
> => {null, null, _col1: int}
> 2017-12-07T15:27:47,651 ERROR [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot 
> add column to RR: null.a => _col1: int due to duplication, see previous 
> warnings
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:3985)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4313)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1392)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1322)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {code}
> After that non-CBO path completes the query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18068) Upgrade to Calcite 1.15

2017-12-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18068:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

The most recent test failures cannot be reproduced locally. Pushed to master; 
thanks for reviewing, [~ashutoshc]!

> Upgrade to Calcite 1.15
> ---
>
> Key: HIVE-18068
> URL: https://issues.apache.org/jira/browse/HIVE-18068
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Fix For: 3.0.0
>
> Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, 
> HIVE-18068.05.patch, HIVE-18068.06.patch, HIVE-18068.2.patch, HIVE-18068.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17710) LockManager should only lock Managed tables

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288471#comment-16288471
 ] 

Hive QA commented on HIVE-17710:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
35s{color} | {color:red} ql: The patch generated 12 new + 167 unchanged - 1 
fixed = 179 total (was 168) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 8f1335d |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8202/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8202/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> LockManager should only lock Managed tables
> ---
>
> Key: HIVE-17710
> URL: https://issues.apache.org/jira/browse/HIVE-17710
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-17710.01.patch, HIVE-17710.02.patch, 
> HIVE-17710.03.patch, HIVE-17710.04.patch, HIVE-17710.04.patch
>
>
> Should the LM take locks on External tables? Out of the box, the Acid LM is 
> conservative, which can cause throughput issues.
> A better strategy may be to exclude External tables but enable an explicit 
> "lock table/partition" command (only on external tables?).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18250) CBO gets turned off with duplicates in RR error

2017-12-12 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18250:
---
Attachment: HIVE-18250.02.patch

> CBO gets turned off with duplicates in RR error
> ---
>
> Key: HIVE-18250
> URL: https://issues.apache.org/jira/browse/HIVE-18250
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Ashutosh Chauhan
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18250.01.patch, HIVE-18250.02.patch
>
>
> {code}
>  create table t1 (a int);
> explain select t1.a as a1, min(t1.a) as a from t1 group by t1.a;
> {code}
> CBO gets turned off with:
> {code}
> WARN [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] parse.RowResolver: Found 
> duplicate column alias in RR: null.a => {null, a1, _col0: int} adding null.a 
> => {null, null, _col1: int}
> 2017-12-07T15:27:47,651 ERROR [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot 
> add column to RR: null.a => _col1: int due to duplication, see previous 
> warnings
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:3985)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4313)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1392)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1322)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {code}
> After that non-CBO path completes the query.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18267) LLAP: Eagerly allocate cache arenas

2017-12-12 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288451#comment-16288451
 ] 

Prasanth Jayachandran commented on HIVE-18267:
--

cc/ [~sershe]

> LLAP: Eagerly allocate cache arenas
> ---
>
> Key: HIVE-18267
> URL: https://issues.apache.org/jira/browse/HIVE-18267
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>
> When LLAP starts, it would be good to eagerly allocate all arenas required by 
> the cache allocator, to avoid OOMs at runtime.
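
An illustrative sketch only, not the LLAP BuddyAllocator API: eager
preallocation simply moves the arena-allocation loop from "on demand, under
cache pressure" to startup, so allocation failures surface immediately. The
sizes below are deliberately small stand-ins.
{code}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class EagerArenaAllocation {
  public static void main(String[] args) {
    long maxCacheBytes = 256L << 20; // assumed cache size (real LLAP caches are far larger)
    int arenaBytes = 16 << 20;       // assumed arena size
    int arenaCount = (int) (maxCacheBytes / arenaBytes);

    // Lazy allocation defers this loop until a query needs a new arena, which
    // can OOM mid-query; doing it all at startup fails fast instead.
    List<ByteBuffer> arenas = new ArrayList<>(arenaCount);
    for (int i = 0; i < arenaCount; i++) {
      arenas.add(ByteBuffer.allocateDirect(arenaBytes));
    }
    System.out.println("Preallocated " + arenas.size() + " arenas up front");
  }
}
{code}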



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18068) Upgrade to Calcite 1.15

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288434#comment-16288434
 ] 

Hive QA commented on HIVE-18068:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901560/HIVE-18068.06.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 20 failed/errored test(s), 11527 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_orcfile] 
(batchId=249)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_ppd_decimal] 
(batchId=9)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[quotedid_smb]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat]
 (batchId=178)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10]
 (batchId=138)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7]
 (batchId=128)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] 
(batchId=120)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8201/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8201/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8201/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 20 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901560 - PreCommit-HIVE-Build

> Upgrade to Calcite 1.15
> ---
>
> Key: HIVE-18068
> URL: https://issues.apache.org/jira/browse/HIVE-18068
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, 
> HIVE-18068.05.patch, HIVE-18068.06.patch, HIVE-18068.2.patch, HIVE-18068.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18068) Upgrade to Calcite 1.15

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288410#comment-16288410
 ] 

Hive QA commented on HIVE-18068:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} ql: The patch generated 0 new + 335 unchanged - 2 
fixed = 335 total (was 337) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} The patch accumulo-handler passed checkstyle {color} 
|
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch hbase-handler passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
47s{color} | {color:green} root: The patch generated 0 new + 335 unchanged - 2 
fixed = 335 total (was 337) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 
40s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 46m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / fe4bd04 |
| Default Java | 1.8.0_111 |
| modules | C: ql accumulo-handler hbase-handler . U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8201/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Upgrade to Calcite 1.15
> ---
>
> Key: HIVE-18068
> URL: https://issues.apache.org/jira/browse/HIVE-18068
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18068.03.patch, HIVE-18068.04.patch, 
> HIVE-18068.05.patch, HIVE-18068.06.patch, HIVE-18068.2.patch, HIVE-18068.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18237) missing results for insert_only table after DP insert

2017-12-12 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom updated HIVE-18237:
--
Issue Type: Sub-task  (was: Bug)
Parent: HIVE-18052

> missing results for insert_only table after DP insert
> -
>
> Key: HIVE-18237
> URL: https://issues.apache.org/jira/browse/HIVE-18237
> Project: Hive
>  Issue Type: Sub-task
>  Components: Transactions
>Reporter: Zoltan Haindrich
>Assignee: Steve Yeom
> Attachments: HIVE-18237.01.patch
>
>
> {code}
> set hive.stats.column.autogather=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=200;
> set hive.exec.max.dynamic.partitions=200;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create table i0 (p int,v int);
> insert into i0 values
> (0,0),
> (2,2),
> (3,3);
> create table p0 (v int) partitioned by (p int) stored as orc 
>   tblproperties ("transactional"="true", 
> "transactional_properties"="insert_only");
> explain insert overwrite table p0 partition (p) select * from i0 where v < 3;
> insert overwrite table p0 partition (p) select * from i0 where v < 3;
> select count(*) from p0 where v!=1;
> {code}
> The table p0 should contain {{2}} rows at this point; but the result is {{0}}.
> * seems to be specific to insert_only tables
> * the existing data appears if an {{insert into}} is executed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-18237) missing results for insert_only table after DP insert

2017-12-12 Thread Steve Yeom (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Yeom reassigned HIVE-18237:
-

Assignee: Steve Yeom

> missing results for insert_only table after DP insert
> -
>
> Key: HIVE-18237
> URL: https://issues.apache.org/jira/browse/HIVE-18237
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Reporter: Zoltan Haindrich
>Assignee: Steve Yeom
> Attachments: HIVE-18237.01.patch
>
>
> {code}
> set hive.stats.column.autogather=false;
> set hive.exec.dynamic.partition.mode=nonstrict;
> set hive.exec.max.dynamic.partitions.pernode=200;
> set hive.exec.max.dynamic.partitions=200;
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> create table i0 (p int,v int);
> insert into i0 values
> (0,0),
> (2,2),
> (3,3);
> create table p0 (v int) partitioned by (p int) stored as orc 
>   tblproperties ("transactional"="true", 
> "transactional_properties"="insert_only");
> explain insert overwrite table p0 partition (p) select * from i0 where v < 3;
> insert overwrite table p0 partition (p) select * from i0 where v < 3;
> select count(*) from p0 where v!=1;
> {code}
> The table p0 should contain {{2}} rows at this point; but the result is {{0}}.
> * seems to be specific to insert_only tables
> * the existing data appears if an {{insert into}} is executed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HIVE-17002) decimal (binary) is not working when creating external table for hbase

2017-12-12 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam resolved HIVE-17002.
--
   Resolution: Duplicate
Fix Version/s: 3.0.0

> decimal (binary) is not working when creating external table for hbase
> --
>
> Key: HIVE-17002
> URL: https://issues.apache.org/jira/browse/HIVE-17002
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
> Environment: HBase 1.2.0, Hive 2.1.1
>Reporter: Artur Tamazian
>Assignee: Naveen Gangam
> Fix For: 3.0.0
>
>
> I have a table in HBase which has a column stored using 
> Bytes.toBytes((BigDecimal) value). The HBase version is 1.2.0.
> I'm creating an external table in Hive to access it like this:
> {noformat}
> create external table `Users`(key int, ..., `example_column` decimal) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with serdeproperties ("hbase.columns.mapping" = ":key, 
> db:example_column") 
> tblproperties("hbase.table.name" = 
> "Users","hbase.table.default.storage.type" = "binary");
> {noformat}
> Table is created without errors. After that I try running "select * from 
> users;" and see this error:
> {noformat}
> org.apache.hive.service.cli.HiveSQLException:java.io.IOException: 
> java.lang.RuntimeException: java.lang.RuntimeException: Hive Internal Error: 
> no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:25:24
>   
>
> org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:484
>   
>
> org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:308
>   
>
> org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:847
>   
>sun.reflect.GeneratedMethodAccessor11:invoke::-1  
>
> sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43
>   
>java.lang.reflect.Method:invoke:Method.java:498  
>
> org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63
>   
>java.security.AccessController:doPrivileged:AccessController.java:-2  
>javax.security.auth.Subject:doAs:Subject.java:422  
>
> org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1698
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59
>   
>com.sun.proxy.$Proxy33:fetchResults::-1  
>org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:504  
>
> org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:698
>   
>
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1717
>   
>
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1702
>   
>org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39  
>org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39  
>
> org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56
>   
>
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286
>   
>
> java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1142
>   
>
> java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:617
>   
>java.lang.Thread:run:Thread.java:748  
>*java.io.IOException:java.lang.RuntimeException: 
> java.lang.RuntimeException: Hive Internal Error: no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:27:2
>   
>org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:164  
>org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:2098  
>
> org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:479
>   
>*java.lang.RuntimeException:java.lang.RuntimeException: Hive Internal 
> Error: no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:43:16
>   
>
> org.apache.hadoop.hive.serde2.lazy.LazyStruct:initLazyFields:LazyStruct.java:172
>   
>org.apache.hadoop.hive.hbase.LazyHBaseRow:initFields:LazyHBaseRow.java:122 
>  
>org.apache.hadoop.hive.hbase.LazyHBaseRow:getField:LazyHBaseRow.java:116  
>
> org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector:getStructFieldData:LazySimpleStructObjectInspector.java:128
>   
>
> 

[jira] [Commented] (HIVE-17002) decimal (binary) is not working when creating external table for hbase

2017-12-12 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288377#comment-16288377
 ] 

Naveen Gangam commented on HIVE-17002:
--

[~arturt] Thanks for confirming. In that case, I will close this jira as a 
duplicate of HIVE-15883, since I just committed the fix to master. Let me know 
if you have any questions. Thanks

> decimal (binary) is not working when creating external table for hbase
> --
>
> Key: HIVE-17002
> URL: https://issues.apache.org/jira/browse/HIVE-17002
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
> Environment: HBase 1.2.0, Hive 2.1.1
>Reporter: Artur Tamazian
>Assignee: Naveen Gangam
>
> I have a table in HBase which has a column stored using 
> Bytes.toBytes((BigDecimal) value). The HBase version is 1.2.0.
> I'm creating an external table in Hive to access it like this:
> {noformat}
> create external table `Users`(key int, ..., `example_column` decimal) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with serdeproperties ("hbase.columns.mapping" = ":key, 
> db:example_column") 
> tblproperties("hbase.table.name" = 
> "Users","hbase.table.default.storage.type" = "binary");
> {noformat}
> Table is created without errors. After that I try running "select * from 
> users;" and see this error:
> {noformat}
> org.apache.hive.service.cli.HiveSQLException:java.io.IOException: 
> java.lang.RuntimeException: java.lang.RuntimeException: Hive Internal Error: 
> no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:25:24
>   
>
> org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:484
>   
>
> org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:308
>   
>
> org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:847
>   
>sun.reflect.GeneratedMethodAccessor11:invoke::-1  
>
> sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43
>   
>java.lang.reflect.Method:invoke:Method.java:498  
>
> org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63
>   
>java.security.AccessController:doPrivileged:AccessController.java:-2  
>javax.security.auth.Subject:doAs:Subject.java:422  
>
> org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1698
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59
>   
>com.sun.proxy.$Proxy33:fetchResults::-1  
>org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:504  
>
> org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:698
>   
>
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1717
>   
>
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1702
>   
>org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39  
>org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39  
>
> org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56
>   
>
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286
>   
>
> java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1142
>   
>
> java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:617
>   
>java.lang.Thread:run:Thread.java:748  
>*java.io.IOException:java.lang.RuntimeException: 
> java.lang.RuntimeException: Hive Internal Error: no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:27:2
>   
>org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:164  
>org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:2098  
>
> org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:479
>   
>*java.lang.RuntimeException:java.lang.RuntimeException: Hive Internal 
> Error: no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:43:16
>   
>
> org.apache.hadoop.hive.serde2.lazy.LazyStruct:initLazyFields:LazyStruct.java:172
>   
>org.apache.hadoop.hive.hbase.LazyHBaseRow:initFields:LazyHBaseRow.java:122 
>  
>org.apache.hadoop.hive.hbase.LazyHBaseRow:getField:LazyHBaseRow.java:116  
>
> 
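
For reference, a minimal write-side sketch of how such a cell ends up in HBase, 
assuming a default client configuration and the table/column names from the 
report above (the class name and values are hypothetical; this is not code from 
the issue):

{code:java}
import java.math.BigDecimal;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class DecimalCellWriter {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table users = conn.getTable(TableName.valueOf("Users"))) {
      // Row key 1; db:example_column holds a BigDecimal in HBase's native
      // binary encoding, matching Bytes.toBytes((BigDecimal) value) from the
      // report. With hbase.table.default.storage.type=binary, Hive must then
      // decode this binary form, which is where the decimal path failed.
      Put put = new Put(Bytes.toBytes(1));
      put.addColumn(Bytes.toBytes("db"), Bytes.toBytes("example_column"),
          Bytes.toBytes(new BigDecimal("123.45")));
      users.put(put);
    }
  }
}
{code}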

[jira] [Updated] (HIVE-15883) HBase mapped table in Hive insert fail for decimal

2017-12-12 Thread Naveen Gangam (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Naveen Gangam updated HIVE-15883:
-
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

> HBase mapped table in Hive insert fail for decimal
> --
>
> Key: HIVE-15883
> URL: https://issues.apache.org/jira/browse/HIVE-15883
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 3.0.0
>
> Attachments: HIVE-15883.1.patch, HIVE-15883.1.patch, HIVE-15883.patch
>
>
> CREATE TABLE hbase_table (
> id int,
> balance decimal(15,2))
> ROW FORMAT DELIMITED
> COLLECTION ITEMS TERMINATED BY '~'
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping"=":key,cf:balance#b");
> insert into hbase_table values (1,1);
> 
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"tmp_values_col1":"1","tmp_values_col2":"1"}
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row {"tmp_values_col1":"1","tmp_values_col2":"1"}
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:507)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
> ... 8 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.serde2.SerDeException: java.lang.RuntimeException: 
> Hive internal error.
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:733)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:97)
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
> ... 9 more
> Caused by: org.apache.hadoop.hive.serde2.SerDeException: 
> java.lang.RuntimeException: Hive internal error.
> at 
> org.apache.hadoop.hive.hbase.HBaseSerDe.serialize(HBaseSerDe.java:286)
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:668)
> ... 15 more
> Caused by: java.lang.RuntimeException: Hive internal error.
> at 
> org.apache.hadoop.hive.serde2.lazy.LazyUtils.writePrimitive(LazyUtils.java:328)
> at 
> org.apache.hadoop.hive.hbase.HBaseRowSerializer.serialize(HBaseRowSerializer.java:220)
> at 
> org.apache.hadoop.hive.hbase.HBaseRowSerializer.serializeField(HBaseRowSerializer.java:194)
> at 
> org.apache.hadoop.hive.hbase.HBaseRowSerializer.serialize(HBaseRowSerializer.java:118)
> at 
> org.apache.hadoop.hive.hbase.HBaseSerDe.serialize(HBaseSerDe.java:282)
> ... 16 more 
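
The "Hive internal error." frames at LazyUtils.writePrimitive are the 
give-away: the binary serialization switch apparently has no branch for 
DECIMAL, so the value falls through to a default that throws. A schematic 
reconstruction of that pattern (illustrative only, not the actual Hive 
source):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class BinaryWriteSketch {
  enum PrimitiveCategory { INT, STRING, DECIMAL }

  // Schematic: a binary serializer switch with no DECIMAL branch.
  static void writePrimitive(OutputStream out, Object v, PrimitiveCategory cat)
      throws IOException {
    switch (cat) {
      case INT:
        int i = (Integer) v;
        out.write(new byte[] {(byte) (i >>> 24), (byte) (i >>> 16),
                              (byte) (i >>> 8), (byte) i});
        break;
      case STRING:
        out.write(((String) v).getBytes(StandardCharsets.UTF_8));
        break;
      default:
        // Before the fix, DECIMAL in binary storage ("#b") landed here.
        throw new RuntimeException("Hive internal error.");
    }
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    writePrimitive(out, 1, PrimitiveCategory.INT);        // ok
    writePrimitive(out, null, PrimitiveCategory.DECIMAL); // throws, as above
  }
}
{code}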



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-15883) HBase mapped table in Hive insert fail for decimal

2017-12-12 Thread Naveen Gangam (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288366#comment-16288366
 ] 

Naveen Gangam commented on HIVE-15883:
--

Thanks for the review [~ashutoshc] and [~aihuaxu]. The fix has been pushed to 
master.

> HBase mapped table in Hive insert fail for decimal
> --
>
> Key: HIVE-15883
> URL: https://issues.apache.org/jira/browse/HIVE-15883
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 1.1.0
>Reporter: Naveen Gangam
>Assignee: Naveen Gangam
> Fix For: 3.0.0
>
> Attachments: HIVE-15883.1.patch, HIVE-15883.1.patch, HIVE-15883.patch
>
>
> CREATE TABLE hbase_table (
> id int,
> balance decimal(15,2))
> ROW FORMAT DELIMITED
> COLLECTION ITEMS TERMINATED BY '~'
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES (
> "hbase.columns.mapping"=":key,cf:balance#b");
> insert into hbase_table values (1,1);
> 
> Diagnostic Messages for this Task:
> Error: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while 
> processing row {"tmp_values_col1":"1","tmp_values_col2":"1"}
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:179)
> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:453)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1783)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing row {"tmp_values_col1":"1","tmp_values_col2":"1"}
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:507)
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:170)
> ... 8 more
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hadoop.hive.serde2.SerDeException: java.lang.RuntimeException: 
> Hive internal error.
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:733)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
> at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
> at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:815)
> at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.processOp(TableScanOperator.java:97)
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:157)
> at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:497)
> ... 9 more
> Caused by: org.apache.hadoop.hive.serde2.SerDeException: 
> java.lang.RuntimeException: Hive internal error.
> at 
> org.apache.hadoop.hive.hbase.HBaseSerDe.serialize(HBaseSerDe.java:286)
> at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:668)
> ... 15 more
> Caused by: java.lang.RuntimeException: Hive internal error.
> at 
> org.apache.hadoop.hive.serde2.lazy.LazyUtils.writePrimitive(LazyUtils.java:328)
> at 
> org.apache.hadoop.hive.hbase.HBaseRowSerializer.serialize(HBaseRowSerializer.java:220)
> at 
> org.apache.hadoop.hive.hbase.HBaseRowSerializer.serializeField(HBaseRowSerializer.java:194)
> at 
> org.apache.hadoop.hive.hbase.HBaseRowSerializer.serialize(HBaseRowSerializer.java:118)
> at 
> org.apache.hadoop.hive.hbase.HBaseSerDe.serialize(HBaseSerDe.java:282)
> ... 16 more 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18250) CBO gets turned off with duplicates in RR error

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288334#comment-16288334
 ] 

Hive QA commented on HIVE-18250:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901596/HIVE-18250.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 19 failed/errored test(s), 11528 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[complex_alias] 
(batchId=17)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[order3] (batchId=64)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_ppd_decimal] 
(batchId=9)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query19] 
(batchId=248)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query55] 
(batchId=248)
org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query71] 
(batchId=248)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query19] 
(batchId=246)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query55] 
(batchId=246)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query71] 
(batchId=246)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8200/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8200/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8200/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 19 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901596 - PreCommit-HIVE-Build

> CBO gets turned off with duplicates in RR error
> ---
>
> Key: HIVE-18250
> URL: https://issues.apache.org/jira/browse/HIVE-18250
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Ashutosh Chauhan
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18250.01.patch
>
>
> {code}
>  create table t1 (a int);
> explain select t1.a as a1, min(t1.a) as a from t1 group by t1.a;
> {code}
> CBO gets turned off with:
> {code}
> WARN [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] parse.RowResolver: Found 
> duplicate column alias in RR: null.a => {null, a1, _col0: int} adding null.a 
> => {null, null, _col1: int}
> 2017-12-07T15:27:47,651 ERROR [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot 
> add column to RR: null.a => _col1: int due to duplication, see previous 
> warnings
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:3985)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4313)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1392)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1322)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {code}
> After that, the non-CBO path completes the query.
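
The collision is easy to see in miniature: both select items resolve to the 
same internal key null.a; the group-by column t1.a registers it first (exposed 
as a1/_col0), and the aggregate aliased "a" then tries to register it again 
(_col1). A schematic sketch with hypothetical names, not Hive's actual 
RowResolver:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class RowResolverSketch {
  public static void main(String[] args) {
    // Output columns keyed by <table alias>.<column alias>.
    Map<String, String> rr = new HashMap<>();
    rr.put("null.a", "_col0");                       // t1.a (also exposed as a1)
    String prev = rr.putIfAbsent("null.a", "_col1"); // min(t1.a) as a
    if (prev != null) {
      // This is the point where CalcitePlanner gives up and CBO is skipped.
      System.out.println("duplicate column alias in RR: null.a => " + prev);
    }
  }
}
{code}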



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18230) create plan like plan, and replace plan commands for easy modification

2017-12-12 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288335#comment-16288335
 ] 

Sergey Shelukhin commented on HIVE-18230:
-

[~harishjp] can you take a look at https://reviews.apache.org/r/64555/? thanks 

> create plan like plan, and replace plan commands for easy modification
> --
>
> Key: HIVE-18230
> URL: https://issues.apache.org/jira/browse/HIVE-18230
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18230.only.nogen.patch, HIVE-18230.patch
>
>
> Given that a plan already deployed on the cluster cannot be altered, it would 
> be helpful to have "create plan like plan" and "replace plan" commands that 
> would make a copy to be modified, then rename+apply the copy in place of an 
> existing plan, and rename the existing active plan with a versioned name or 
> drop it altogether.
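
A hedged sketch of how the proposed commands might be driven over JDBC; the 
statements below are assumptions based on this description and the patch under 
review, not confirmed grammar:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ResourcePlanCopySketch {
  public static void main(String[] args) throws Exception {
    try (Connection c =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement s = c.createStatement()) {
      // Copy the deployed plan so the copy can be edited freely.
      s.execute("CREATE RESOURCE PLAN plan_v2 LIKE plan_v1");
      // ... ALTER plan_v2 here ...
      // Swap the edited copy in for the active plan; the old plan is
      // renamed (versioned) or dropped, per the description above.
      s.execute("REPLACE ACTIVE RESOURCE PLAN WITH plan_v2");
    }
  }
}
{code}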



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HIVE-18266) LLAP: /system references wrong file for THP

2017-12-12 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran resolved HIVE-18266.
--
   Resolution: Fixed
Fix Version/s: 3.0.0

Committed to master. Thanks for the review!

> LLAP: /system references wrong file for THP
> ---
>
> Key: HIVE-18266
> URL: https://issues.apache.org/jira/browse/HIVE-18266
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Fix For: 3.0.0
>
> Attachments: HIVE-18266.1.patch
>
>
> Copy-paste error in the /system endpoint: THP references the same files again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18124) clean up isAcidTable() API vs isInsertOnlyTable()

2017-12-12 Thread Eugene Koifman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-18124:
--
Attachment: HIVE-18124.05.patch

Patch 5 removes some obsolete tests.

>  clean up isAcidTable() API vs isInsertOnlyTable()
> --
>
> Key: HIVE-18124
> URL: https://issues.apache.org/jira/browse/HIVE-18124
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-18124.01.patch, HIVE-18124.02.patch, 
> HIVE-18124.03.patch, HIVE-18124.04.patch, HIVE-18124.05.patch
>
>
> With the addition of MM tables (_AcidUtils.isInsertOnlyTable(table)_), the 
> methods in AcidUtils and dependent places are very muddled.
> We need to clean this up so that there is an isTransactional(Table) that 
> checks the transactional=true setting, an isAcid(Table) that means full ACID, 
> and an isInsertOnly(Table) that means MM tables.
> This would accurately describe the semantics of the tables.
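
A minimal sketch of the proposed split, assuming the standard table properties 
(transactional=true, and transactional_properties=insert_only for MM tables); 
the real methods would live in AcidUtils and take a Table object:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class AcidPredicatesSketch {
  static boolean isTransactional(Map<String, String> props) {
    return "true".equalsIgnoreCase(props.get("transactional"));
  }
  // MM (insert-only) table: transactional, but with insert_only properties.
  static boolean isInsertOnly(Map<String, String> props) {
    return isTransactional(props)
        && "insert_only".equalsIgnoreCase(props.get("transactional_properties"));
  }
  // Full ACID: transactional and not insert-only.
  static boolean isAcid(Map<String, String> props) {
    return isTransactional(props) && !isInsertOnly(props);
  }

  public static void main(String[] args) {
    Map<String, String> mm = new HashMap<>();
    mm.put("transactional", "true");
    mm.put("transactional_properties", "insert_only");
    System.out.println(isTransactional(mm)); // true
    System.out.println(isAcid(mm));          // false
    System.out.println(isInsertOnly(mm));    // true
  }
}
{code}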



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18230) create plan like plan, and replace plan commands for easy modification

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18230:

Status: Patch Available  (was: Open)

> create plan like plan, and replace plan commands for easy modification
> --
>
> Key: HIVE-18230
> URL: https://issues.apache.org/jira/browse/HIVE-18230
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18230.only.nogen.patch, HIVE-18230.patch
>
>
> Given that a plan already deployed on the cluster cannot be altered, it would 
> be helpful to have "create plan like plan" and "replace plan" commands that 
> would make a copy to be modified, then rename+apply the copy in place of an 
> existing plan, and rename the existing active plan with a versioned name or 
> drop it altogether.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18230) create plan like plan, and replace plan commands for easy modification

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18230:

Attachment: HIVE-18230.patch

Patch based on master, including the generated code changes.

> create plan like plan, and replace plan commands for easy modification
> --
>
> Key: HIVE-18230
> URL: https://issues.apache.org/jira/browse/HIVE-18230
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18230.only.nogen.patch, HIVE-18230.patch
>
>
> Given that a plan already deployed on the cluster cannot be altered, it would 
> be helpful to have "create plan like plan" and "replace plan" commands that 
> would make a copy to be modified, then rename+apply the copy in place of an 
> existing plan, and rename the existing active plan with a versioned name or 
> drop it altogether.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18230) create plan like plan, and replace plan commands for easy modification

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18230:

Attachment: HIVE-18230.only.nogen.patch

The patch with only this jira. The main patch is on top of the disable-WM 
command JIRA, because otherwise there are too many conflicts.

> create plan like plan, and replace plan commands for easy modification
> --
>
> Key: HIVE-18230
> URL: https://issues.apache.org/jira/browse/HIVE-18230
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18230.only.nogen.patch
>
>
> Given that a plan already deployed on the cluster cannot be altered, it would 
> be helpful to have "create plan like plan" and "replace plan" commands that 
> would make a copy to be modified, then rename+apply the copy in place of an 
> existing plan, and rename the existing active plan with a versioned name or 
> drop it altogether.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18230) create plan like plan, and replace plan commands for easy modification

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18230:

Attachment: (was: HIVE-18230.WIP.patch)

> create plan like plan, and replace plan commands for easy modification
> --
>
> Key: HIVE-18230
> URL: https://issues.apache.org/jira/browse/HIVE-18230
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>
> Given that a plan already deployed on the cluster cannot be altered, it would 
> be helpful to have "create plan like plan" and "replace plan" commands that 
> would make a copy to be modified, then rename+apply the copy in place of an 
> existing plan, and rename the existing active plan with a versioned name or 
> drop it altogether.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18266) LLAP: /system references wrong file for THP

2017-12-12 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288302#comment-16288302
 ] 

Sergey Shelukhin commented on HIVE-18266:
-

+1

> LLAP: /system references wrong file for THP
> ---
>
> Key: HIVE-18266
> URL: https://issues.apache.org/jira/browse/HIVE-18266
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-18266.1.patch
>
>
> Copy-paste error in the /system endpoint: THP references the same files again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18241) Query with LEFT SEMI JOIN producing wrong result

2017-12-12 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18241:
---
Status: Patch Available  (was: Open)

Latest patch addresses review comment

> Query with LEFT SEMI JOIN producing wrong result
> 
>
> Key: HIVE-18241
> URL: https://issues.apache.org/jira/browse/HIVE-18241
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-18241.1.patch, HIVE-18241.2.patch, 
> HIVE-18241.3.patch
>
>
> The following query produces a wrong result:
> {code:sql}
> select key, value from src outr left semi join (select a.key, b.value from 
> src a join (select distinct value from src) b on a.value > b.value group by 
> a.key, b.value) inr on outr.key=inr.key and outr.value=inr.value;
> {code}
> The expected result is an empty set, but it outputs a number of rows.
> The schema for the {{src}} table can be found in 
> {{data/scripts/q_test_init.sql}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18241) Query with LEFT SEMI JOIN producing wrong result

2017-12-12 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18241:
---
Attachment: HIVE-18241.3.patch

> Query with LEFT SEMI JOIN producing wrong result
> 
>
> Key: HIVE-18241
> URL: https://issues.apache.org/jira/browse/HIVE-18241
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-18241.1.patch, HIVE-18241.2.patch, 
> HIVE-18241.3.patch
>
>
> The following query produces a wrong result:
> {code:sql}
> select key, value from src outr left semi join (select a.key, b.value from 
> src a join (select distinct value from src) b on a.value > b.value group by 
> a.key, b.value) inr on outr.key=inr.key and outr.value=inr.value;
> {code}
> The expected result is an empty set, but it outputs a number of rows.
> The schema for the {{src}} table can be found in 
> {{data/scripts/q_test_init.sql}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18241) Query with LEFT SEMI JOIN producing wrong result

2017-12-12 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-18241:
---
Status: Open  (was: Patch Available)

> Query with LEFT SEMI JOIN producing wrong result
> 
>
> Key: HIVE-18241
> URL: https://issues.apache.org/jira/browse/HIVE-18241
> Project: Hive
>  Issue Type: Bug
>Reporter: Vineet Garg
>Assignee: Vineet Garg
> Attachments: HIVE-18241.1.patch, HIVE-18241.2.patch
>
>
> The following query produces a wrong result:
> {code:sql}
> select key, value from src outr left semi join (select a.key, b.value from 
> src a join (select distinct value from src) b on a.value > b.value group by 
> a.key, b.value) inr on outr.key=inr.key and outr.value=inr.value;
> {code}
> The expected result is an empty set, but it outputs a number of rows.
> The schema for the {{src}} table can be found in 
> {{data/scripts/q_test_init.sql}}.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18250) CBO gets turned off with duplicates in RR error

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288268#comment-16288268
 ] 

Hive QA commented on HIVE-18250:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
53s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
36s{color} | {color:red} ql: The patch generated 1 new + 249 unchanged - 63 
fixed = 250 total (was 312) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1320d2b |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8200/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8200/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> CBO gets turned off with duplicates in RR error
> ---
>
> Key: HIVE-18250
> URL: https://issues.apache.org/jira/browse/HIVE-18250
> Project: Hive
>  Issue Type: Bug
>  Components: CBO, Query Planning
>Affects Versions: 2.0.0, 2.1.0, 2.2.0, 2.3.0
>Reporter: Ashutosh Chauhan
>Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-18250.01.patch
>
>
> {code}
>  create table t1 (a int);
> explain select t1.a as a1, min(t1.a) as a from t1 group by t1.a;
> {code}
> CBO gets turned off with:
> {code}
> WARN [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] parse.RowResolver: Found 
> duplicate column alias in RR: null.a => {null, a1, _col0: int} adding null.a 
> => {null, null, _col1: int}
> 2017-12-07T15:27:47,651 ERROR [2e80e34e-dc46-49cf-88bf-2c24c0262d41 main] 
> parse.CalcitePlanner: CBO failed, skipping CBO.
> org.apache.hadoop.hive.ql.optimizer.calcite.CalciteSemanticException: Cannot 
> add column to RR: null.a => _col1: int due to duplication, see previous 
> warnings
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genSelectLogicalPlan(CalcitePlanner.java:3985)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.genLogicalPlan(CalcitePlanner.java:4313)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1392)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1322)
>  

[jira] [Updated] (HIVE-18208) SMB Join : Fix the unit tests to run SMB Joins.

2017-12-12 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-18208:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Committed to master

> SMB Join : Fix the unit tests to run SMB Joins.
> ---
>
> Key: HIVE-18208
> URL: https://issues.apache.org/jira/browse/HIVE-18208
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
> Fix For: 3.0.0
>
> Attachments: HIVE-18208.1.patch, HIVE-18208.2.patch, 
> HIVE-18208.3.patch
>
>
> Most of the SMB Join tests are actually not creating SMB Joins. Need them to 
> test the intended join.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15393) Update Guava version

2017-12-12 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-15393:
--
Attachment: HIVE-15393.5.patch

Updated to Guava version 21.

> Update Guava version
> 
>
> Key: HIVE-15393
> URL: https://issues.apache.org/jira/browse/HIVE-15393
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, 
> HIVE-15393.5.patch, HIVE-15393.patch
>
>
> The Druid code base uses a newer version of Guava (16.0.1) that is not 
> compatible with the current version used by Hive.
> FYI, the Hadoop project is moving to Guava 18; not sure if it is better to 
> move to Guava 18 or even 19.
> https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-15393) Update Guava version

2017-12-12 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-15393:
--
Attachment: (was: HIVE-15393.4.patch)

> Update Guava version
> 
>
> Key: HIVE-15393
> URL: https://issues.apache.org/jira/browse/HIVE-15393
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, HIVE-15393.patch
>
>
> The Druid code base uses a newer version of Guava (16.0.1) that is not 
> compatible with the current version used by Hive.
> FYI, the Hadoop project is moving to Guava 18; not sure if it is better to 
> move to Guava 18 or even 19.
> https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-14498) Freshness period for query rewriting using materialized views

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288244#comment-16288244
 ] 

Hive QA commented on HIVE-14498:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901420/HIVE-14498.patch

{color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 281 failed/errored test(s), 11530 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=246)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view] (batchId=40)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[create_view_partitioned] 
(batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cteViews] (batchId=76)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_authorization_sqlstd]
 (batchId=47)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_create]
 (batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_describe]
 (batchId=68)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_drop] 
(batchId=10)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=37)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=54)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[show_materialized_views] 
(batchId=14)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_unionDistinct_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite]
 (batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_2]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_multi_db]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_describe]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_drop]
 (batchId=153)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_rewrite_ssb_2]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[resourceplan]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_views]
 (batchId=154)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[unionDistinct_2]
 (batchId=154)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[materialized_view_drop]
 (batchId=92)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_views] 
(batchId=110)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterPartition 
(batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterTable 
(batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testAlterViewParititon
 (batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testComplexTable 
(batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testFilterLastPartition
 (batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testFilterSinglePartition
 (batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testGetSchemaWithNoClassDefFoundError
 (batchId=214)
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testGetTableObjects 
(batchId=214)

[jira] [Updated] (HIVE-15393) Update Guava version

2017-12-12 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-15393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-15393:
--
Attachment: HIVE-15393.4.patch

> Update Guava version
> 
>
> Key: HIVE-15393
> URL: https://issues.apache.org/jira/browse/HIVE-15393
> Project: Hive
>  Issue Type: Improvement
>Affects Versions: 2.2.0
>Reporter: slim bouguerra
>Assignee: Ashutosh Chauhan
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-15393.2.patch, HIVE-15393.3.patch, 
> HIVE-15393.4.patch, HIVE-15393.patch
>
>
> The Druid code base uses a newer version of Guava (16.0.1) that is not 
> compatible with the current version used by Hive.
> FYI, the Hadoop project is moving to Guava 18; not sure if it is better to 
> move to Guava 18 or even 19.
> https://issues.apache.org/jira/browse/HADOOP-10101



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-14498) Freshness period for query rewriting using materialized views

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288235#comment-16288235
 ] 

Hive QA commented on HIVE-14498:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m  
6s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
53s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
19s{color} | {color:red} common: The patch generated 4 new + 942 unchanged - 0 
fixed = 946 total (was 942) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} standalone-metastore: The patch generated 49 new + 
3479 unchanged - 6 fixed = 3528 total (was 3485) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
53s{color} | {color:red} ql: The patch generated 7 new + 2543 unchanged - 11 
fixed = 2550 total (was 2554) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
37s{color} | {color:red} root: The patch generated 60 new + 7176 unchanged - 17 
fixed = 7236 total (was 7193) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 95 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
2s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
55s{color} | {color:red} standalone-metastore generated 2 new + 54 unchanged - 
0 fixed = 56 total (was 54) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  5m 
24s{color} | {color:red} root generated 2 new + 329 unchanged - 0 fixed = 331 
total (was 329) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / 1320d2b |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8199/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8199/yetus/diff-checkstyle-standalone-metastore.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8199/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8199/yetus/diff-checkstyle-root.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8199/yetus/whitespace-eol.txt 
|
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8199/yetus/whitespace-tabs.txt
 |
| javadoc | 

[jira] [Resolved] (HIVE-18197) Fix issue with wrong segments identifier usage.

2017-12-12 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra resolved HIVE-18197.
---
Resolution: Fixed

> Fix issue with wrong segments identifier usage.
> ---
>
> Key: HIVE-18197
> URL: https://issues.apache.org/jira/browse/HIVE-18197
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>
> We have 2 different issues that can make checking of load status fail for 
> Druid segments.
> Both are due to the use of a wrong segment identifier at a couple of 
> locations.
> # We are constructing the segment identifier with the UTC timezone, which can 
> be wrong if the segments were built in a different timezone. The way to fix 
> this is to use the segment identifier itself instead of re-making it on the 
> client side.
> # We are using outdated segment identifiers for the INSERT INTO case. The way 
> to fix this is to use the segment metadata produced by the metadata commit 
> phase.
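
Issue #1 can be illustrated with plain java.time and a hypothetical identifier 
shape (real Druid identifiers also carry the interval end and a version): the 
same instant renders differently per zone, so re-making the identifier in UTC 
on the client need not match what the server produced:

{code:java}
import java.time.Instant;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class SegmentIdSketch {
  // Hypothetical identifier shape: dataSource_intervalStart.
  static String segmentId(Instant start, ZoneId zone) {
    return "wikipedia_"
        + start.atZone(zone).format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
  }

  public static void main(String[] args) {
    Instant t = Instant.parse("2017-12-12T00:00:00Z");
    // Same instant, two different identifier strings:
    System.out.println(segmentId(t, ZoneId.of("UTC")));
    System.out.println(segmentId(t, ZoneId.of("America/Los_Angeles")));
    // Hence fix #1: carry the server-produced identifier around instead of
    // re-making it on the client with an assumed (UTC) timezone.
  }
}
{code}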



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18266) LLAP: /system references wrong file for THP

2017-12-12 Thread Prasanth Jayachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288204#comment-16288204
 ] 

Prasanth Jayachandran commented on HIVE-18266:
--

This does not require precommit tests, and no tests are affected by this 
change.

> LLAP: /system references wrong file for THP
> ---
>
> Key: HIVE-18266
> URL: https://issues.apache.org/jira/browse/HIVE-18266
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-18266.1.patch
>
>
> Copy-paste error in the /system endpoint: THP references the same files again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18266) LLAP: /system references wrong file for THP

2017-12-12 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran updated HIVE-18266:
-
Attachment: HIVE-18266.1.patch

[~sershe] can you please take a look? Small fix.

> LLAP: /system references wrong file for THP
> ---
>
> Key: HIVE-18266
> URL: https://issues.apache.org/jira/browse/HIVE-18266
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
> Attachments: HIVE-18266.1.patch
>
>
> Copy-paste error in the /system endpoint: THP references the same files again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HIVE-18266) LLAP: /system references wrong file for THP

2017-12-12 Thread Prasanth Jayachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prasanth Jayachandran reassigned HIVE-18266:



> LLAP: /system references wrong file for THP
> ---
>
> Key: HIVE-18266
> URL: https://issues.apache.org/jira/browse/HIVE-18266
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Prasanth Jayachandran
>
> Copy-paste error in the /system endpoint: THP references the same files again.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18263) Ptest execution are multiple times slower sometimes due to dying executor slaves

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288151#comment-16288151
 ] 

Hive QA commented on HIVE-18263:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901711/HIVE-18263.0.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 11499 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.org.apache.hadoop.hive.cli.TestCliDriver
 (batchId=85)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] 
(batchId=48)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] 
(batchId=12)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] 
(batchId=151)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementParallel
 (batchId=230)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8198/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8198/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8198/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901711 - PreCommit-HIVE-Build

> Ptest execution are multiple times slower sometimes due to dying executor 
> slaves
> 
>
> Key: HIVE-18263
> URL: https://issues.apache.org/jira/browse/HIVE-18263
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Adam Szita
>Assignee: Adam Szita
> Attachments: HIVE-18263.0.patch
>
>
> The PreCommit-HIVE-Build job has been seen running very long from time to 
> time. Usually it should take about 1.5 hours, but in some cases it took over 
> 4-5 hours.
> Looking at the logs of one such execution, I've seen that some commands sent 
> to the test-executing slaves returned 255. This typically means there is an 
> unknown return code for the remote call, since hiveptest-server can't reach 
> these slaves anymore.
> The hiveptest-server logs show that some slaves were killed while running the 
> job normally, and here is why:
> * Hive's ptest-server periodically checks the status of slaves, every 60 
> minutes. It also keeps track of slaves that were terminated.
> ** If upon such a check it is found that a slave that was already killed 
> ([mTerminatedHosts 
> map|https://github.com/apache/hive/blob/master/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/context/CloudExecutionContextProvider.java#L93]
>  contains its IP) is still running, it will try to terminate it again.
> * The server also maintains a file on its local FS that contains the IPs of 
> hosts that were used before. (This is probably for resilience reasons.)
> ** This file is read when the Tomcat server starts, and if any of the IPs in 
> the file are seen as running slaves, ptest will terminate these first so it 
> can begin with a fresh start.
> ** The IPs of these terminated instances already make their way into 
> {{mTerminatedHosts}} upon initialization...
> * The cloud provider may reuse some older IPs, so it is not too rare that the 
> same IP that belonged to a terminated host is assigned to a new one.
> This is problematic: Hive ptest's slave caretaker thread kicks in every 60 
> minutes and might see a running host with the same IP as an old slave that 
> was terminated at startup. It will think that this host should be terminated 
> since it already tried 60 minutes ago as its IP 
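
A condensed sketch of the described race, with hypothetical names standing in 
for the real CloudExecutionContextProvider logic:

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CaretakerRaceSketch {
  // Hypothetical stand-in for the mTerminatedHosts map, keyed by IP.
  private final Set<String> terminatedHosts = new HashSet<>();

  // Startup: IPs from the local "hosts used before" file are terminated
  // and remembered.
  void onStartup(Set<String> staleIpsFromLocalFile) {
    terminatedHosts.addAll(staleIpsFromLocalFile);
  }

  // Caretaker thread, every 60 minutes: anything running whose IP is in
  // the terminated set gets killed again.
  void hourlyCheck(Set<String> runningSlaveIps) {
    for (String ip : runningSlaveIps) {
      if (terminatedHosts.contains(ip)) {
        System.out.println("terminating " + ip); // kills a healthy new slave
      }
    }
  }

  public static void main(String[] args) {
    CaretakerRaceSketch c = new CaretakerRaceSketch();
    c.onStartup(new HashSet<>(Arrays.asList("10.0.0.5")));
    // The cloud provider reuses 10.0.0.5 for a brand-new slave, which the
    // next hourly check then terminates mid-run.
    c.hourlyCheck(new HashSet<>(Arrays.asList("10.0.0.5")));
  }
}
{code}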

[jira] [Updated] (HIVE-18208) SMB Join : Fix the unit tests to run SMB Joins.

2017-12-12 Thread Deepak Jaiswal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-18208:
--
Attachment: HIVE-18208.3.patch

Rebased the patch.

> SMB Join : Fix the unit tests to run SMB Joins.
> ---
>
> Key: HIVE-18208
> URL: https://issues.apache.org/jira/browse/HIVE-18208
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
> Attachments: HIVE-18208.1.patch, HIVE-18208.2.patch, 
> HIVE-18208.3.patch
>
>
> Most of the SMB Join tests are actually not creating SMB Joins. Need them to 
> test the intended join.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18003) add explicit jdbc connection string args for mappings

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18003:

Status: Patch Available  (was: Open)

> add explicit jdbc connection string args for mappings
> -
>
> Key: HIVE-18003
> URL: https://issues.apache.org/jira/browse/HIVE-18003
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, 
> HIVE-18003.03.patch, HIVE-18003.patch, HIVE-18153.04.patch
>
>
> 1) Force using unmanaged/containers execution.
> 2) Optional - specify pool name (config setting to gate this, disabled by 
> default?).
> In phase 2 (or 4?) we might allow #2 to be used by a user to choose between 
> multiple mappings if they have multiple pools they could be mapped to (i.e. 
> to change the ordering essentially). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18203) change the way WM is enabled and allow dropping the last resource plan

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18203:

Attachment: HIVE-18203.03.patch

> change the way WM is enabled and allow dropping the last resource plan
> --
>
> Key: HIVE-18203
> URL: https://issues.apache.org/jira/browse/HIVE-18203
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18203.01.patch, HIVE-18203.02.patch, 
> HIVE-18203.03.patch, HIVE-18203.patch
>
>
> Currently it's impossible to drop the last active resource plan even if WM is 
> disabled. It should be possible to deactivate the last resource plan AND 
> disable WM in the same action. Activating a resource plan should enable WM in 
> this case.
> This should interact with the WM queue config in a sensible manner.
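
A hedged sketch of the intended flow over JDBC; the statements below are 
assumptions based on this description, not confirmed grammar:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class WmToggleSketch {
  public static void main(String[] args) throws Exception {
    try (Connection c =
             DriverManager.getConnection("jdbc:hive2://localhost:10000/default");
         Statement s = c.createStatement()) {
      // Deactivate the active plan and disable WM in one action ...
      s.execute("DISABLE WORKLOAD MANAGEMENT");
      // ... which then allows dropping even the last resource plan.
      s.execute("DROP RESOURCE PLAN last_plan");
      // Activating a plan later re-enables WM.
      s.execute("CREATE RESOURCE PLAN new_plan");
      s.execute("ALTER RESOURCE PLAN new_plan ACTIVATE");
    }
  }
}
{code}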



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18203) change the way WM is enabled and allow dropping the last resource plan

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18203:

Status: Patch Available  (was: Open)

> change the way WM is enabled and allow dropping the last resource plan
> --
>
> Key: HIVE-18203
> URL: https://issues.apache.org/jira/browse/HIVE-18203
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18203.01.patch, HIVE-18203.02.patch, 
> HIVE-18203.03.patch, HIVE-18203.patch
>
>
> Currently it's impossible to drop the last active resource plan even if WM is 
> disabled. It should be possible to deactivate the last resource plan AND 
> disable WM in the same action. Activating a resource plan should enable WM in 
> this case.
> This should interact with the WM queue config in a sensible manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18203) change the way WM is enabled and allow dropping the last resource plan

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18203:

Status: Open  (was: Patch Available)

> change the way WM is enabled and allow dropping the last resource plan
> --
>
> Key: HIVE-18203
> URL: https://issues.apache.org/jira/browse/HIVE-18203
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Aswathy Chellammal Sreekumar
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18203.01.patch, HIVE-18203.02.patch, 
> HIVE-18203.patch
>
>
> Currently it's impossible to drop the last active resource plan even if WM is 
> disabled. It should be possible to deactivate the last resource plan AND 
> disable WM in the same action. Activating a resource plan should enable WM in 
> this case.
> This should interact with the WM queue config in a sensible manner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18153) refactor reopen and file management in TezTask

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18153:

Attachment: HIVE-18153.05.patch

> refactor reopen and file management in TezTask
> ----------------------------------------------
>
> Key: HIVE-18153
> URL: https://issues.apache.org/jira/browse/HIVE-18153
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18153.01.patch, HIVE-18153.02.patch, 
> HIVE-18153.03.patch, HIVE-18153.04.patch, HIVE-18153.05.patch, 
> HIVE-18153.patch
>
>
> TezTask reopen relies on getting the same session object in terms of setup; 
> WM reopen returns a new session from the pool. 
> The former has the advantage of not having to re-upload files and other 
> resources... but the object reuse results in a lot of ugly code, and reopen 
> might also be slower on average with the session pool than just getting a 
> session from the pool. Either WM needs to do the object-preserving reopen, or 
> TezTask needs to be refactored. It looks like the DAG would have to be rebuilt 
> to do the latter because some paths are tied to a directory of the old 
> session. Let me see if I can get around that; if not, we can do the former; 
> and if the former results in too much ugly code in WM to account for object 
> reuse across different Tez clients, I'd do the latter anyway since it's a 
> failure path :)
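
A sketch of the two strategies being weighed, with hypothetical shapes (Hive's real session classes are nothing this simple):

{code}
import java.util.function.Supplier;

interface Session { void close(); void open(); }

final class ReopenSketch {
  // (a) Object-preserving reopen: the same Session object survives, so state
  // tied to it (uploaded files, scratch-dir paths) stays valid, at the cost
  // of mutable-state handling spread through the code.
  static Session reopenInPlace(Session s) {
    s.close();
    s.open();
    return s;
  }

  // (b) Pool-based reopen: a fresh object with a simpler lifecycle, but
  // anything referencing the old session's directory (e.g. the DAG) must be
  // rebuilt.
  static Session reopenFromPool(Session bad, Supplier<Session> pool) {
    bad.close();
    return pool.get();
  }
}
{code}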



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18078) WM getSession needs some retry logic

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18078:

Attachment: HIVE-18078.03.patch

I hate ptest

> WM getSession needs some retry logic
> ------------------------------------
>
> Key: HIVE-18078
> URL: https://issues.apache.org/jira/browse/HIVE-18078
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18078.01.patch, HIVE-18078.01.patch, 
> HIVE-18078.02.patch, HIVE-18078.03.patch, HIVE-18078.only.patch, 
> HIVE-18078.patch
>
>
> When we get a bad session (e.g. no registry info because the AM has gone 
> catatonic), the failure of the timeout future fails the getSession call.
> The retry model in TezTask is that it would get a session (which in the 
> original model can be completely unusable, but we still get the object), and 
> then retry (reopen) if it's a lemon. If the reopen fails, we fail.
> getSession is not covered by this retry scheme, and should thus do its own 
> retries (or the retry logic needs to be changed).
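
A minimal sketch of the kind of retry wrapper getSession could use, assuming a hypothetical {{SessionPool.getSession()}} that throws when the session is bad; the names, bounds, and backoff are illustrative only:

{code}
import java.util.concurrent.TimeUnit;

public final class GetSessionRetry {
  interface SessionPool { Object getSession() throws Exception; }  // hypothetical

  static Object getSessionWithRetry(SessionPool pool, int maxAttempts)
      throws Exception {
    Exception last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return pool.getSession();
      } catch (Exception e) {
        last = e;  // e.g. the timeout future failed because the AM went catatonic
        TimeUnit.MILLISECONDS.sleep(100L * attempt);  // simple linear backoff
      }
    }
    throw last != null ? last : new IllegalArgumentException("maxAttempts < 1");
  }
}
{code}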



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18003) add explicit jdbc connection string args for mappings

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18003:

Attachment: HIVE-18153.04.patch

HiveQA got killed again... this is really annoying

> add explicit jdbc connection string args for mappings
> -----------------------------------------------------
>
> Key: HIVE-18003
> URL: https://issues.apache.org/jira/browse/HIVE-18003
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, 
> HIVE-18003.03.patch, HIVE-18003.patch, HIVE-18153.04.patch
>
>
> 1) Force using unmanaged/containers execution.
> 2) Optional - specify pool name (config setting to gate this, disabled by 
> default?).
> In phase 2 (or 4?) we might allow #2 to be used by a user to choose between 
> multiple mappings if they have multiple pools they could be mapped to (i.e. 
> to change the ordering essentially). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18228) Azure credential properties should be added to the HiveConf hidden list

2017-12-12 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18228?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288112#comment-16288112
 ] 

Andrew Sherman commented on HIVE-18228:
---------------------------------------

Test failures look unconnected to this change, so I think this is ready to push 
if you agree, [~pvary].

> Azure credential properties should be added to the HiveConf hidden list
> -----------------------------------------------------------------------
>
> Key: HIVE-18228
> URL: https://issues.apache.org/jira/browse/HIVE-18228
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-18228.1.patch, HIVE-18228.2.patch, 
> HIVE-18228.3.patch
>
>
> The HIVE_CONF_HIDDEN_LIST ("hive.conf.hidden.list") already contains keys 
> containing AWS credentials. The Azure properties to be added are:
> * dfs.adls.oauth2.credential
> * fs.adl.oauth2.credential
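
A self-contained sketch of what the hidden list does: values whose keys appear in a comma-separated deny-list are masked before the configuration is shown to users. This mirrors the intent of {{hive.conf.hidden.list}} without using Hive's actual implementation:

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class HiddenListDemo {
  public static void main(String[] args) {
    Set<String> hidden = new HashSet<>(Arrays.asList(
        "dfs.adls.oauth2.credential", "fs.adl.oauth2.credential"));
    Map<String, String> conf = new LinkedHashMap<>();
    conf.put("dfs.adls.oauth2.credential", "s3kr3t");
    conf.put("hive.execution.engine", "tez");
    // Mask every hidden key before display, the way HiveServer2 strips
    // hidden entries from user-visible configuration output.
    conf.forEach((k, v) ->
        System.out.println(k + "=" + (hidden.contains(k) ? "***" : v)));
  }
}
{code}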



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18003) add explicit jdbc connection string args for mappings

2017-12-12 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HIVE-18003:

Status: Open  (was: Patch Available)

> add explicit jdbc connection string args for mappings
> -----------------------------------------------------
>
> Key: HIVE-18003
> URL: https://issues.apache.org/jira/browse/HIVE-18003
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Attachments: HIVE-18003.01.patch, HIVE-18003.02.patch, 
> HIVE-18003.03.patch, HIVE-18003.patch
>
>
> 1) Force using unmanaged/containers execution.
> 2) Optional - specify pool name (config setting to gate this, disabled by 
> default?).
> In phase 2 (or 4?) we might allow #2 to be used by a user to choose between 
> multiple mappings if they have multiple pools they could be mapped to (i.e. 
> to change the ordering essentially). 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-17710) LockManager should only lock Managed tables

2017-12-12 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288109#comment-16288109
 ] 

Alan Gates commented on HIVE-17710:
-----------------------------------

+1 for patch 4.

> LockManager should only lock Managed tables
> -------------------------------------------
>
> Key: HIVE-17710
> URL: https://issues.apache.org/jira/browse/HIVE-17710
> Project: Hive
>  Issue Type: New Feature
>  Components: Transactions
>Reporter: Eugene Koifman
>Assignee: Eugene Koifman
> Attachments: HIVE-17710.01.patch, HIVE-17710.02.patch, 
> HIVE-17710.03.patch, HIVE-17710.04.patch, HIVE-17710.04.patch
>
>
> Should the LM take locks on external tables? Out of the box, the ACID LM is 
> conservative, which can cause throughput issues.
> A better strategy may be to exclude external tables but enable an explicit 
> "lock table/partition" command (only on external tables?).
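
A sketch of the gating predicate such a change implies, written against the metastore's {{TableType}} enum; whether the real patch hooks in exactly this way is an assumption, and the explicit-lock flag is hypothetical:

{code}
import org.apache.hadoop.hive.metastore.TableType;

public final class LockPolicySketch {
  // Managed tables get implicit ACID locks; external tables are skipped
  // unless the user issued an explicit "lock table/partition" command.
  static boolean shouldLock(TableType type, boolean explicitLockRequested) {
    if (type == TableType.EXTERNAL_TABLE) {
      return explicitLockRequested;
    }
    return type == TableType.MANAGED_TABLE;
  }
}
{code}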



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18054) Make Lineage work with concurrent queries on a Session

2017-12-12 Thread Andrew Sherman (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288080#comment-16288080
 ] 

Andrew Sherman commented on HIVE-18054:
---------------------------------------

Test failures are unconnected to this change, so I think this is ready to 
push, [~stakiar], if you agree.

>  Make Lineage work with concurrent queries on a Session
> -------------------------------------------------------
>
> Key: HIVE-18054
> URL: https://issues.apache.org/jira/browse/HIVE-18054
> Project: Hive
>  Issue Type: Bug
>Reporter: Andrew Sherman
>Assignee: Andrew Sherman
> Attachments: HIVE-18054.1.patch, HIVE-18054.10.patch, 
> HIVE-18054.11.patch, HIVE-18054.12.patch, HIVE-18054.13.patch, 
> HIVE-18054.2.patch, HIVE-18054.3.patch, HIVE-18054.4.patch, 
> HIVE-18054.5.patch, HIVE-18054.6.patch, HIVE-18054.7.patch, 
> HIVE-18054.8.patch, HIVE-18054.9.patch
>
>
> A Hive session can contain multiple concurrent SQL operations.
> Lineage is currently tracked in SessionState and is cleared when a query 
> completes. This results in lineage for other running queries being lost.
> To fix this, move LineageState from SessionState to QueryState.
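
A simplified sketch of the ownership change: instead of one lineage object shared through the session (and cleared whenever any query finishes), each query carries its own. The classes below are hypothetical stand-ins, not Hive's real SessionState/QueryState:

{code}
import java.util.ArrayList;
import java.util.List;

class LineageStateSketch {
  private final List<String> edges = new ArrayList<>();
  void addEdge(String edge) { edges.add(edge); }
  List<String> edges() { return edges; }
}

// Each query owns its lineage for its own lifetime, so concurrent
// operations on the same session no longer clobber each other.
class QueryStateSketch {
  private final LineageStateSketch lineage = new LineageStateSketch();
  LineageStateSketch getLineageState() { return lineage; }
}
{code}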



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18191) Vectorization: Add validation of TableScanOperator (gather statistics) back

2017-12-12 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288074#comment-16288074
 ] 

Matt McCline commented on HIVE-18191:
-------------------------------------

Committed to master.

> Vectorization: Add validation of TableScanOperator (gather statistics) back
> ---------------------------------------------------------------------------
>
> Key: HIVE-18191
> URL: https://issues.apache.org/jira/browse/HIVE-18191
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18191.01.patch, HIVE-18191.02.patch, 
> HIVE-18191.03.patch, HIVE-18191.04.patch, HIVE-18191.05.patch, 
> HIVE-18191.06.patch, HIVE-18191.07.patch, HIVE-18191.08.patch
>
>
> HIVE-17433 accidentally removed call to validateTableScanOperator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17495) CachedStore: prewarm improvement (avoid multiple sql calls to read partition column stats), refactoring and caching some aggregate stats

2017-12-12 Thread Vaibhav Gumashta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vaibhav Gumashta updated HIVE-17495:

Attachment: HIVE-17495.6.patch

> CachedStore: prewarm improvement (avoid multiple sql calls to read partition 
> column stats), refactoring and caching some aggregate stats
> ----------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-17495
> URL: https://issues.apache.org/jira/browse/HIVE-17495
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Vaibhav Gumashta
>Assignee: Vaibhav Gumashta
> Attachments: HIVE-17495.1.patch, HIVE-17495.2.patch, 
> HIVE-17495.3.patch, HIVE-17495.4.patch, HIVE-17495.5.patch, HIVE-17495.6.patch
>
>
> 1. One SQL call to retrieve column stats objects for a db
> 2. Cache some aggregate stats for speedup
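
A sketch of the caching half, assuming a hypothetical cache keyed by database, table, and column list; {{computeIfAbsent}} makes the single bulk SQL call only on a miss, and prewarm can batch-populate the map with one call per db:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class AggrStatsCacheSketch {
  // Hypothetical key format: "db/table/col1,col2,...".
  private final Map<String, Object> cache = new ConcurrentHashMap<>();

  Object get(String key, Function<String, Object> loadWithOneSqlCall) {
    // Only a cache miss triggers the (expensive) metastore SQL call.
    return cache.computeIfAbsent(key, loadWithOneSqlCall);
  }
}
{code}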



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18191) Vectorization: Add validation of TableScanOperator (gather statistics) back

2017-12-12 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18191:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Vectorization: Add validation of TableScanOperator (gather statistics) back
> ---------------------------------------------------------------------------
>
> Key: HIVE-18191
> URL: https://issues.apache.org/jira/browse/HIVE-18191
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18191.01.patch, HIVE-18191.02.patch, 
> HIVE-18191.03.patch, HIVE-18191.04.patch, HIVE-18191.05.patch, 
> HIVE-18191.06.patch, HIVE-18191.07.patch, HIVE-18191.08.patch
>
>
> HIVE-17433 accidentally removed call to validateTableScanOperator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-18191) Vectorization: Add validation of TableScanOperator (gather statistics) back

2017-12-12 Thread Matt McCline (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-18191:

Fix Version/s: 3.0.0

> Vectorization: Add validation of TableScanOperator (gather statistics) back
> ---------------------------------------------------------------------------
>
> Key: HIVE-18191
> URL: https://issues.apache.org/jira/browse/HIVE-18191
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HIVE-18191.01.patch, HIVE-18191.02.patch, 
> HIVE-18191.03.patch, HIVE-18191.04.patch, HIVE-18191.05.patch, 
> HIVE-18191.06.patch, HIVE-18191.07.patch, HIVE-18191.08.patch
>
>
> HIVE-17433 accidentally removed call to validateTableScanOperator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18112) show create for view having special char in where clause is not showing properly

2017-12-12 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288070#comment-16288070
 ] 

Sankar Hariappan commented on HIVE-18112:
-----------------------------------------

+1

Hi [~owen.omalley], 
[~nareshpr] wants to merge this patch into branch-2.2. As you are the owner of 
this branch, please let us know if there are any concerns.

> show create for view having special char in where clause is not showing 
> properly
> --------------------------------------------------------------------------------
>
> Key: HIVE-18112
> URL: https://issues.apache.org/jira/browse/HIVE-18112
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-18112-branch-2.2.patch, 
> HIVE-18112.1-branch-2.2.patch
>
>
> e.g., 
> CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where 
> `evil_byte1`.`a` = 'abcÖdefÖgh';
> Output:
> ==
> 0: jdbc:hive2://172.26.122.227:1> show create table v2;
> +----------------------------------------------------------------------------------------------------------------+
> | createtab_stmt                                                                                                  |
> +----------------------------------------------------------------------------------------------------------------+
> | CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where `evil_byte1`.`a` = 'abc�def�gh'  |
> +----------------------------------------------------------------------------------------------------------------+
> Only the show create output has invalid characters; the actual source table 
> content is displayed properly in the console.
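
The symptom points at a charset round-trip bug: somewhere on the show-create path the view text is decoded (or re-encoded) with the platform default charset instead of UTF-8. A minimal, self-contained sketch of that failure class, independent of Hive's actual code:

{code}
import java.nio.charset.StandardCharsets;

public class CharsetRoundTrip {
  public static void main(String[] args) {
    byte[] utf8 = "abcÖdefÖgh".getBytes(StandardCharsets.UTF_8);
    // Decoding with the platform default (e.g. US-ASCII on some hosts) maps
    // the multi-byte Ö to replacement characters: the 'abc�def�gh' corruption
    // seen above.
    String wrong = new String(utf8);
    String right = new String(utf8, StandardCharsets.UTF_8);
    System.out.println("default: " + wrong + " | utf-8: " + right);
  }
}
{code}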



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-18112) show create for view having special char in where clause is not showing properly

2017-12-12 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1628#comment-1628
 ] 

Sankar Hariappan edited comment on HIVE-18112 at 12/12/17 6:52 PM:
------------------------------------------------------------------

[~nareshpr]

I think a similar bug exists for tables as well, and I saw it is fixed in 
apache/master. Please check it.


was (Author: sankarh):
[~nareshpr]
+1

I think the similar bug is there for Tables as well and I saw it is fixed in 
apache/master. Please check it.

> show create for view having special char in where clause is not showing 
> properly
> --------------------------------------------------------------------------------
>
> Key: HIVE-18112
> URL: https://issues.apache.org/jira/browse/HIVE-18112
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Naresh P R
>Assignee: Naresh P R
>Priority: Minor
> Fix For: 2.2.0
>
> Attachments: HIVE-18112-branch-2.2.patch, 
> HIVE-18112.1-branch-2.2.patch
>
>
> e.g., 
> CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where 
> `evil_byte1`.`a` = 'abcÖdefÖgh';
> Output:
> ==
> 0: jdbc:hive2://172.26.122.227:1> show create table v2;
> +----------------------------------------------------------------------------------------------------------------+
> | createtab_stmt                                                                                                  |
> +----------------------------------------------------------------------------------------------------------------+
> | CREATE VIEW `v2` AS select `evil_byte1`.`a` from `default`.`EVIL_BYTE1` where `evil_byte1`.`a` = 'abc�def�gh'  |
> +----------------------------------------------------------------------------------------------------------------+
> Only the show create output has invalid characters; the actual source table 
> content is displayed properly in the console.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Attachment: HIVE-17794.02.patch

> HCatLoader breaks when a member is added to a struct-column of a table
> ----------------------------------------------------------------------
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.02.patch, HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Status: Patch Available  (was: Open)

> HCatLoader breaks when a member is added to a struct-column of a table
> ----------------------------------------------------------------------
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.02.patch, HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Attachment: (was: HIVE-17794.2-branch-2.patch)

> HCatLoader breaks when a member is added to a struct-column of a table
> ----------------------------------------------------------------------
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Attachment: (was: HIVE-17794.2-branch-2.2.patch)

> HCatLoader breaks when a member is added to a struct-column of a table
> ----------------------------------------------------------------------
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Attachment: (was: HIVE-17794.2.patch)

> HCatLoader breaks when a member is added to a struct-column of a table
> ----------------------------------------------------------------------
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.1.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Commented] (HIVE-17002) decimal (binary) is not working when creating external table for hbase

2017-12-12 Thread Artur Tamazian (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288046#comment-16288046
 ] 

Artur Tamazian commented on HIVE-17002:
---------------------------------------

I don't have time right now, but I looked at your patch and it's almost exactly 
the same as what I ended up doing for our installation.
The only difference is that in LazyDioHiveDecimal::init I initialized the data 
field like this:

{code}
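// parse HBase's binary BigDecimal encoding, then wrap it as a Hive decimal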
data = new HiveDecimalWritable(
    HiveDecimal.create(Bytes.toBigDecimal(bytes.getData(), start, length)));
{code}

> decimal (binary) is not working when creating external table for hbase
> ----------------------------------------------------------------------
>
> Key: HIVE-17002
> URL: https://issues.apache.org/jira/browse/HIVE-17002
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 2.1.1
> Environment: HBase 1.2.0, Hive 2.1.1
>Reporter: Artur Tamazian
>Assignee: Naveen Gangam
>
> I have a table in HBase which has a column stored using 
> Bytes.toBytes((BigDecimal) value). The HBase version is 1.2.0.
> I'm creating an external table in hive to access it like this:
> {noformat}
> create external table `Users`(key int, ..., `example_column` decimal) 
> stored by 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' 
> with serdeproperties ("hbase.columns.mapping" = ":key, 
> db:example_column") 
> tblproperties("hbase.table.name" = 
> "Users","hbase.table.default.storage.type" = "binary");
> {noformat}
> Table is created without errors. After that I try running "select * from 
> users;" and see this error:
> {noformat}
> org.apache.hive.service.cli.HiveSQLException:java.io.IOException: 
> java.lang.RuntimeException: java.lang.RuntimeException: Hive Internal Error: 
> no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:25:24
>   
>
> org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:484
>   
>
> org.apache.hive.service.cli.operation.OperationManager:getOperationNextRowSet:OperationManager.java:308
>   
>
> org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:847
>   
>sun.reflect.GeneratedMethodAccessor11:invoke::-1  
>
> sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43
>   
>java.lang.reflect.Method:invoke:Method.java:498  
>
> org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63
>   
>java.security.AccessController:doPrivileged:AccessController.java:-2  
>javax.security.auth.Subject:doAs:Subject.java:422  
>
> org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1698
>   
>
> org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59
>   
>com.sun.proxy.$Proxy33:fetchResults::-1  
>org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:504  
>
> org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:698
>   
>
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1717
>   
>
> org.apache.hive.service.rpc.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1702
>   
>org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39  
>org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39  
>
> org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56
>   
>
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:286
>   
>
> java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1142
>   
>
> java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:617
>   
>java.lang.Thread:run:Thread.java:748  
>*java.io.IOException:java.lang.RuntimeException: 
> java.lang.RuntimeException: Hive Internal Error: no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:27:2
>   
>org.apache.hadoop.hive.ql.exec.FetchTask:fetch:FetchTask.java:164  
>org.apache.hadoop.hive.ql.Driver:getResults:Driver.java:2098  
>
> org.apache.hive.service.cli.operation.SQLOperation:getNextRowSet:SQLOperation.java:479
>   
>*java.lang.RuntimeException:java.lang.RuntimeException: Hive Internal 
> Error: no LazyObject for 
> org.apache.hadoop.hive.serde2.lazy.objectinspector.primitive.LazyHiveDecimalObjectInspector@1f18cebb:43:16
>   
>
> org.apache.hadoop.hive.serde2.lazy.LazyStruct:initLazyFields:LazyStruct.java:172
>   
>

[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Status: Open  (was: Patch Available)

> HCatLoader breaks when a member is added to a struct-column of a table
> ----------------------------------------------------------------------
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.1.patch, HIVE-17794.2-branch-2.2.patch, 
> HIVE-17794.2-branch-2.patch, HIVE-17794.2.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 

[jira] [Commented] (HIVE-18263) Ptest execution are multiple times slower sometimes due to dying executor slaves

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288034#comment-16288034
 ] 

Hive QA commented on HIVE-18263:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 3 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / d6ce23d |
| Default Java | 1.8.0_111 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8198/yetus/patch-asflicense-problems.txt
 |
| modules | C: testutils/ptest2 U: testutils/ptest2 |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8198/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Ptest execution are multiple times slower sometimes due to dying executor 
> slaves
> --------------------------------------------------------------------------------
>
> Key: HIVE-18263
> URL: https://issues.apache.org/jira/browse/HIVE-18263
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Adam Szita
>Assignee: Adam Szita
> Attachments: HIVE-18263.0.patch
>
>
> The PreCommit-HIVE-Build job has been seen running very long from time to 
> time. Usually it should take about 1.5 hours, but in some cases it took over 
> 4-5 hours.
> Looking at the logs of one such execution, I've seen that some commands sent 
> to test-executing slaves returned 255. Here this typically means the return 
> code of the remote call is unknown because hiveptest-server can't reach these 
> slaves anymore.
> The hiveptest-server logs show that some slaves were killed while running the 
> job normally, and here is why:
> * Hive's ptest-server checks the status of slaves periodically, every 60 
> minutes. It also keeps track of slaves that were terminated.
> ** If such a check finds that a slave that was already killed (the 
> [mTerminatedHosts 
> map|https://github.com/apache/hive/blob/master/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/context/CloudExecutionContextProvider.java#L93]
>  contains its IP) is still running, it will try to terminate it again.
> * The server also maintains a file on its local FS that contains the IPs of 
> hosts that were used before. (This is probably for resilience reasons.)
> ** This file is read when the Tomcat server starts, and if any of the IPs in 
> the file are seen as running slaves, ptest will 

[jira] [Commented] (HIVE-18208) SMB Join : Fix the unit tests to run SMB Joins.

2017-12-12 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288024#comment-16288024
 ] 

Jason Dere commented on HIVE-18208:
-----------------------------------

It looks like HIVE-13567 has updated a lot of the qfile outputs; can you rebase 
the patch again?

> SMB Join : Fix the unit tests to run SMB Joins.
> -----------------------------------------------
>
> Key: HIVE-18208
> URL: https://issues.apache.org/jira/browse/HIVE-18208
> Project: Hive
>  Issue Type: Bug
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
> Attachments: HIVE-18208.1.patch, HIVE-18208.2.patch
>
>
> Most of the SMB Join tests are actually not creating SMB joins. They need to 
> test the intended join.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18191) Vectorization: Add validation of TableScanOperator (gather statistics) back

2017-12-12 Thread Matt McCline (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288021#comment-16288021
 ] 

Matt McCline commented on HIVE-18191:
-------------------------------------

Test failures appear unrelated.

> Vectorization: Add validation of TableScanOperator (gather statistics) back
> ---------------------------------------------------------------------------
>
> Key: HIVE-18191
> URL: https://issues.apache.org/jira/browse/HIVE-18191
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18191.01.patch, HIVE-18191.02.patch, 
> HIVE-18191.03.patch, HIVE-18191.04.patch, HIVE-18191.05.patch, 
> HIVE-18191.06.patch, HIVE-18191.07.patch, HIVE-18191.08.patch
>
>
> HIVE-17433 accidentally removed call to validateTableScanOperator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18191) Vectorization: Add validation of TableScanOperator (gather statistics) back

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16288012#comment-16288012
 ] 

Hive QA commented on HIVE-18191:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12901657/HIVE-18191.08.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 11527 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2]
 (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=165)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=160)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part]
 (batchId=93)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=93)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[subquery_multi] 
(batchId=113)
org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints 
(batchId=226)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8197/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8197/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8197/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12901657 - PreCommit-HIVE-Build

> Vectorization: Add validation of TableScanOperator (gather statistics) back
> ---
>
> Key: HIVE-18191
> URL: https://issues.apache.org/jira/browse/HIVE-18191
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18191.01.patch, HIVE-18191.02.patch, 
> HIVE-18191.03.patch, HIVE-18191.04.patch, HIVE-18191.05.patch, 
> HIVE-18191.06.patch, HIVE-18191.07.patch, HIVE-18191.08.patch
>
>
> HIVE-17433 accidentally removed the call to validateTableScanOperator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18111) Fix temp path for Spark DPP sink

2017-12-12 Thread Sahil Takiar (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287986#comment-16287986
 ] 

Sahil Takiar commented on HIVE-18111:
-

+1 assuming the Hive QA failures are benign; there is also a checkstyle issue 
that should be fixed.

One more question: if each DPP work outputs to 
{{QUERY_TMP_PATH/dpp_output/dppWorkId}} and each map work reads from 
{{QUERY_TMP_PATH/dpp_output}}, what happens if there are multiple DPP sinks 
within a query with different target map works?

> Fix temp path for Spark DPP sink
> 
>
> Key: HIVE-18111
> URL: https://issues.apache.org/jira/browse/HIVE-18111
> Project: Hive
>  Issue Type: Bug
>  Components: Spark
>Reporter: Rui Li
>Assignee: Rui Li
> Attachments: HIVE-18111.1.patch, HIVE-18111.2.patch, 
> HIVE-18111.3.patch, HIVE-18111.4.patch, HIVE-18111.5.patch, HIVE-18111.5.patch
>
>
> Before HIVE-17877, each DPP sink has only one target work. The output path of 
> a DPP work is {{TMP_PATH/targetWorkId/dppWorkId}}. When we do the pruning, 
> each map work reads DPP outputs under {{TMP_PATH/targetWorkId}}.
> After HIVE-17877, each DPP sink can have multiple target works. It's possible 
> that a map work needs to read DPP outputs from multiple 
> {{TMP_PATH/targetWorkId}}. To solve this, I think we can have a DPP output 
> path specific to each query, e.g. {{QUERY_TMP_PATH/dpp_output}}. Each DPP 
> work outputs to {{QUERY_TMP_PATH/dpp_output/dppWorkId}}. And each map work 
> reads from {{QUERY_TMP_PATH/dpp_output}}.
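
To make the proposed layout concrete, here is a minimal sketch (the helper 
class and method names are illustrative, not the actual Hive implementation):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hedged sketch of the per-query DPP output layout described above.
class DppPathSketch {
  // Each DPP work writes only under its own id:
  // QUERY_TMP_PATH/dpp_output/dppWorkId
  static Path dppSinkOutput(Path queryTmpPath, String dppWorkId) {
    return new Path(new Path(queryTmpPath, "dpp_output"), dppWorkId);
  }

  // Each map work prunes against every DPP output produced for the query,
  // regardless of which sink wrote it.
  static FileStatus[] dppOutputsForQuery(FileSystem fs, Path queryTmpPath)
      throws IOException {
    return fs.listStatus(new Path(queryTmpPath, "dpp_output"));
  }
}
{code}

Because the read side lists the whole {{dpp_output}} directory, outputs from 
multiple DPP sinks with different target map works all land under the same 
per-query root, which is the situation the question above is probing.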



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HIVE-18263) Ptest execution are multiple times slower sometimes due to dying executor slaves

2017-12-12 Thread Barna Zsombor Klara (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287976#comment-16287976
 ] 

Barna Zsombor Klara edited comment on HIVE-18263 at 12/12/17 6:03 PM:
--

Thank you for the patch, Adam, and for the detailed analysis. Just one minor 
question:
Is there any reason we gather all the addresses into one set and then iterate 
over that set, instead of iterating over the nodes and, in a nested loop, over 
their addresses to remove them from the failed-hosts collection?
Not a big issue, I'm just curious.
Otherwise +1.

[~spena], based on the linked Jira it seems you came to a different 
conclusion: that the IPs cannot clash between the killed and the live hosts. 
Would you please help clarify what is (or can be) going on here? I'm confused.


was (Author: zsombor.klara):
Thank you for the patch, Adam, and for the detailed analysis. Just one minor 
question:
Is there any reason we gather all the addresses into one set and then iterate 
over that set, instead of iterating over the nodes and, in a nested loop, over 
their addresses to remove them from the failed-hosts collection?
Not a big issue, I'm just curious.
Otherwise +1.

> Ptest execution are multiple times slower sometimes due to dying executor 
> slaves
> 
>
> Key: HIVE-18263
> URL: https://issues.apache.org/jira/browse/HIVE-18263
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Adam Szita
>Assignee: Adam Szita
> Attachments: HIVE-18263.0.patch
>
>
> The PreCommit-HIVE-Build job is sometimes seen running for a very long time. 
> It should usually take about 1.5 hours, but in some cases it has taken over 
> 4-5 hours.
> Looking in the logs of one such execution, I've seen that some commands sent 
> to test-executing slaves returned 255. Here this typically means the return 
> code of the remote call is unknown, because hiveptest-server can't reach 
> these slaves anymore.
> The hiveptest-server logs show that some slaves were killed while running 
> the job normally, and here is why:
> * Hive's ptest-server periodically checks the status of its slaves every 60 
> minutes. It also keeps track of slaves that were terminated.
> ** If upon such a check it finds that a slave that was already killed (the 
> [mTerminatedHosts 
> map|https://github.com/apache/hive/blob/master/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/context/CloudExecutionContextProvider.java#L93]
>  contains its IP) is still running, it will try to terminate it again.
> * The server also maintains a file on its local FS that contains the IPs of 
> hosts that were used before. (This is probably for resilience reasons.)
> ** This file is read when the tomcat server starts, and if any of the IPs in 
> the file are seen as running slaves, ptest will terminate these first so it 
> can begin with a fresh start.
> ** The IPs of these terminated instances already make their way into 
> {{mTerminatedHosts}} upon initialization...
> * The cloud provider may reuse older IPs, so it is not too rare that an IP 
> that belonged to a terminated host is assigned to a new one.
> This is problematic: Hive ptest's slave caretaker thread kicks in every 60 
> minutes and might see a running host with the same IP as an old slave that 
> was terminated at startup. It will think that this host should be 
> terminated, since its IP is in {{mTerminatedHosts}} from the attempt made 60 
> minutes ago.
> We have to fix this by making sure that whenever a new slave is created, we 
> check the contents of {{mTerminatedHosts}} and remove its IP from it if it 
> is there.
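
For illustration, a minimal sketch of the fix described above; it also shows 
the gather-into-a-set shape discussed in the comment. The node type and 
accessors follow the jclouds API, but treat the exact names and the map's 
value type as assumptions rather than the actual ptest code:

{code}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.jclouds.compute.domain.NodeMetadata;

// Hedged sketch: when fresh slaves come up, drop their (possibly recycled)
// IPs from the terminated-hosts bookkeeping so the caretaker thread does
// not kill them again an hour later.
class TerminatedHostsSketch {
  static void forgetReusedIps(Set<NodeMetadata> newNodes,
      Map<String, Long> mTerminatedHosts) {
    // Gather every address of the newly created slaves into one set...
    Set<String> fresh = new HashSet<>();
    for (NodeMetadata node : newNodes) {
      fresh.addAll(node.getPublicAddresses());
      fresh.addAll(node.getPrivateAddresses());
    }
    // ...then drop them from the terminated-hosts map in a single pass.
    mTerminatedHosts.keySet().removeAll(fresh);
  }
}
{code}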



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18263) Ptest execution are multiple times slower sometimes due to dying executor slaves

2017-12-12 Thread Barna Zsombor Klara (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287976#comment-16287976
 ] 

Barna Zsombor Klara commented on HIVE-18263:


Thank you for the patch, Adam, and for the detailed analysis. Just one minor 
question:
Is there any reason we gather all the addresses into one set and then iterate 
over that set, instead of iterating over the nodes and, in a nested loop, over 
their addresses to remove them from the failed-hosts collection?
Not a big issue, I'm just curious.
Otherwise +1.

> Ptest execution are multiple times slower sometimes due to dying executor 
> slaves
> 
>
> Key: HIVE-18263
> URL: https://issues.apache.org/jira/browse/HIVE-18263
> Project: Hive
>  Issue Type: Bug
>  Components: Testing Infrastructure
>Reporter: Adam Szita
>Assignee: Adam Szita
> Attachments: HIVE-18263.0.patch
>
>
> The PreCommit-HIVE-Build job is sometimes seen running for a very long time. 
> It should usually take about 1.5 hours, but in some cases it has taken over 
> 4-5 hours.
> Looking in the logs of one such execution, I've seen that some commands sent 
> to test-executing slaves returned 255. Here this typically means the return 
> code of the remote call is unknown, because hiveptest-server can't reach 
> these slaves anymore.
> The hiveptest-server logs show that some slaves were killed while running 
> the job normally, and here is why:
> * Hive's ptest-server periodically checks the status of its slaves every 60 
> minutes. It also keeps track of slaves that were terminated.
> ** If upon such a check it finds that a slave that was already killed (the 
> [mTerminatedHosts 
> map|https://github.com/apache/hive/blob/master/testutils/ptest2/src/main/java/org/apache/hive/ptest/execution/context/CloudExecutionContextProvider.java#L93]
>  contains its IP) is still running, it will try to terminate it again.
> * The server also maintains a file on its local FS that contains the IPs of 
> hosts that were used before. (This is probably for resilience reasons.)
> ** This file is read when the tomcat server starts, and if any of the IPs in 
> the file are seen as running slaves, ptest will terminate these first so it 
> can begin with a fresh start.
> ** The IPs of these terminated instances already make their way into 
> {{mTerminatedHosts}} upon initialization...
> * The cloud provider may reuse older IPs, so it is not too rare that an IP 
> that belonged to a terminated host is assigned to a new one.
> This is problematic: Hive ptest's slave caretaker thread kicks in every 60 
> minutes and might see a running host with the same IP as an old slave that 
> was terminated at startup. It will think that this host should be 
> terminated, since its IP is in {{mTerminatedHosts}} from the attempt made 60 
> minutes ago.
> We have to fix this by making sure that whenever a new slave is created, we 
> check the contents of {{mTerminatedHosts}} and remove its IP from it if it 
> is there.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HIVE-18191) Vectorization: Add validation of TableScanOperator (gather statistics) back

2017-12-12 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16287946#comment-16287946
 ] 

Hive QA commented on HIVE-18191:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh |
| git revision | master / d6ce23d |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-8197/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Add validation of TableScanOperator (gather statistics) back
> ---
>
> Key: HIVE-18191
> URL: https://issues.apache.org/jira/browse/HIVE-18191
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-18191.01.patch, HIVE-18191.02.patch, 
> HIVE-18191.03.patch, HIVE-18191.04.patch, HIVE-18191.05.patch, 
> HIVE-18191.06.patch, HIVE-18191.07.patch, HIVE-18191.08.patch
>
>
> HIVE-17433 accidentally removed the call to validateTableScanOperator.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HIVE-17794) HCatLoader breaks when a member is added to a struct-column of a table

2017-12-12 Thread Mithun Radhakrishnan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mithun Radhakrishnan updated HIVE-17794:

Affects Version/s: 2.4.0

> HCatLoader breaks when a member is added to a struct-column of a table
> --
>
> Key: HIVE-17794
> URL: https://issues.apache.org/jira/browse/HIVE-17794
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0, 3.0.0, 2.4.0
>Reporter: Mithun Radhakrishnan
>Assignee: Mithun Radhakrishnan
> Attachments: HIVE-17794.1.patch, HIVE-17794.2-branch-2.2.patch, 
> HIVE-17794.2-branch-2.patch, HIVE-17794.2.patch
>
>
> When a table's schema evolves to add a new member to a struct column, Hive 
> queries work fine, but {{HCatLoader}} breaks with the following trace:
> {noformat}
> TaskAttempt 1 failed, info=
>  Error: Failure while running 
> task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: kite_composites_with_segments: Local Rearrange
>  tuple
> {chararray}(false) - scope-555-> scope-974 Operator Key: scope-555): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POLocalRearrange.getNextTuple(POLocalRearrange.java:287)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.plan.operator.POLocalRearrangeTez.getNextTuple(POLocalRearrangeTez.java:127)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.runPipeline(PigProcessor.java:376)
> at 
> org.apache.pig.backend.hadoop.executionengine.tez.runtime.PigProcessor.run(PigProcessor.java:241)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:362)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:179)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:171)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1679)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:171)
> at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:167)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup: New For Each(false,false)
>  bag
> - scope-548 Operator Key: scope-548): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: Exception 
> while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.POForEach.getNextTuple(POForEach.java:252)
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:305)
> ... 17 more
> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: gup_filtered: Filter
>  bag
> - scope-522 Operator Key: scope-522): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> org.apache.pig.backend.executionengine.ExecException: ERROR 6018: Error 
> converting read value to tuple
> at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:314)
> at 
> 
