[jira] [Commented] (HIVE-19016) Vectorization and Parquet: Disable vectorization for nested complex types

2018-06-19 Thread Teddy Choi (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517824#comment-16517824
 ] 

Teddy Choi commented on HIVE-19016:
---

LGTM +1. Pending tests.

> Vectorization and Parquet: Disable vectorization for nested complex types
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Original title: Vectorization and Parquet: When vectorized, 
> parquet_nested_complex.q produces RuntimeException: Unsupported type used
>  
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19928) Load Data for managed tables should set the owner of loaded files to a configurable user

2018-06-19 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal updated HIVE-19928:
--
Attachment: HIVE-19928.3.patch

> Load Data for managed tables should set the owner of loaded files to a 
> configurable user
> 
>
> Key: HIVE-19928
> URL: https://issues.apache.org/jira/browse/HIVE-19928
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
> Attachments: HIVE-19928.1.patch, HIVE-19928.2.patch, 
> HIVE-19928.3.patch
>
>
> Load data for managed tables should set the owner of the loaded files to a 
> configurable user; the default user should be hive.
> If the owner of an existing file is not hive, then a rename/move operation 
> should be replaced by a copy, with the copied file having hive as owner.
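
To make the copy-vs-rename decision above concrete, here is a minimal sketch against the 
Hadoop FileSystem API. The helper name, how the configured owner is passed in, and the 
commented-out setOwner call are assumptions for illustration only; this is not the attached patch.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class LoadDataOwnerSketch {
  /**
   * Moves a data file into the managed table location so that the result is
   * owned by the configured user (default "hive").
   */
  static void moveWithOwner(FileSystem fs, Path src, Path dst,
      String configuredOwner, Configuration conf) throws IOException {
    String currentOwner = fs.getFileStatus(src).getOwner();
    if (configuredOwner.equals(currentOwner)) {
      // Owner already matches: a cheap rename/move is enough.
      fs.rename(src, dst);
    } else {
      // Owner differs: copy instead of rename, so the destination file is
      // created (and therefore owned) by the Hive process user, then drop
      // the source.
      FileUtil.copy(fs, src, fs, dst, /* deleteSource = */ true, conf);
      // If the process user is not the configured owner, the owner could be
      // forced explicitly (requires HDFS superuser privileges):
      // fs.setOwner(dst, configuredOwner, null);
    }
  }
}
{code}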



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19940) Push predicates with deterministic UDFs with RBO

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517815#comment-16517815
 ] 

Hive QA commented on HIVE-19940:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928265/HIVE-19940.1.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 14534 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[masking_disablecbo_2] 
(batchId=46)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_deterministic_expr] 
(batchId=19)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union_offcbo] 
(batchId=47)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[check_constraint]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in]
 (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin]
 (batchId=171)
org.apache.hadoop.hive.cli.TestTezPerfCliDriver.testCliDriver[query8] 
(batchId=257)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11937/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11937/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11937/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928265 - PreCommit-HIVE-Build

> Push predicates with deterministic UDFs with RBO
> 
>
> Key: HIVE-19940
> URL: https://issues.apache.org/jira/browse/HIVE-19940
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19940.1.patch
>
>
> With RBO, predicates with any UDF don't get pushed down.  It makes sense not 
> to push down predicates with non-deterministic functions, as the meaning of 
> the query changes after the predicate is resolved to use the function.  But 
> pushing down a deterministic function is beneficial.
> Test Case:
> {code}
> set hive.cbo.enable=false;
> CREATE TABLE `testb`(
>`cola` string COMMENT '',
>`colb` string COMMENT '',
>`colc` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> CREATE TABLE `testa`(
>`col1` string COMMENT '',
>`col2` string COMMENT '',
>`col3` string COMMENT '',
>`col4` string COMMENT '',
>`col5` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> insert into testA partition (part1='US', part2='ABC', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='UK', part2='DEF', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='US', part2='DEF', part3='200')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='CA', part2='ABC', part3='300')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='300')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='CA', part2='ABC', part3='400')
> values ( '600', '700', 'abc'), ( '601', '701', 'abcd');
> insert into testB partition (part1='UK', part2='PQR', part3='500')
> values ('600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='DEF', part3='200')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> insert into testB partition (part1='US', part2='PQR', part3='123')
> values ( '600', '700', 'abc'), ('601', '701', 'abcd');
> -- views with deterministic functions
> create view viewDeterministicUDFA partitioned on (vpart1, vpart2, vpart3) as 
> select
>  cast(col1 as decimal(38,18)) as vcol1,
>  cast(col2 as decimal(38,18)) as vcol2,
>  cast(col3 as decimal(38,18)) as vcol3,
>  cast(col4 as decimal(38,18)) as vcol4,
>  cast(col5 as char(10)) as vcol5,
>  cast(part1 as char(2)) as vpart1,
>  cast(part2 as char(3)) as vpart2,
>  cast(part3 as char(3)) as vpart3
>  

[jira] [Updated] (HIVE-19920) Schematool fails in embedded mode when auth is on

2018-06-19 Thread Daniel Dai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated HIVE-19920:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 4.0.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Patch pushed to branch-3/master.

> Schematool fails in embedded mode when auth is on
> -
>
> Key: HIVE-19920
> URL: https://issues.apache.org/jira/browse/HIVE-19920
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19920.1.patch, HIVE-19920.2.patch, 
> HIVE-19920.3.patch
>
>
> This is a follow up of HIVE-19775. We need to override more properties in 
> embedded hs2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19922) TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky

2018-06-19 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517809#comment-16517809
 ] 

Peter Vary commented on HIVE-19922:
---

Thanks for taking a look [~ashutoshc], [~nishantbangarwa]. Seen failures again 
with this error:
{code}
Error Message
Client Execution succeeded but contained differences (error code = 1) after 
executing druidkafkamini_basic.q 
165a166,175
> Cherno Alpha
> Cherno Alpha
> Coyote Tango
> Coyote Tango
> Crimson Typhoon
> Crimson Typhoon
> Gypsy Danger
> Gypsy Danger
> Striker Eureka
> Striker Eureka
{code}
https://builds.apache.org/job/PreCommit-HIVE-Build/11935/testReport/org.apache.hadoop.hive.cli/TestMiniDruidKafkaCliDriver/testCliDriver_druidkafkamini_basic_/
https://builds.apache.org/job/PreCommit-HIVE-Build/11902/testReport/org.apache.hadoop.hive.cli/TestMiniDruidKafkaCliDriver/testCliDriver_druidkafkamini_basic_/


> TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky
> --
>
> Key: HIVE-19922
> URL: https://issues.apache.org/jira/browse/HIVE-19922
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-19922.2.patch, HIVE-19922.3.patch, HIVE-19922.patch
>
>
> Consistently failing in the last 4 runs.
> See:
> [https://builds.apache.org/job/PreCommit-HIVE-Build/11824/testReport/org.apache.hadoop.hive.cli/TestMiniDruidKafkaCliDriver/testCliDriver_druidkafkamini_basic_/history/]
> Can not reproduce the failure locally :(
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19940) Push predicates with deterministic UDFs with RBO

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517798#comment-16517798
 ] 

Hive QA commented on HIVE-19940:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11937/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11937/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Push predicates with deterministic UDFs with RBO
> 
>
> Key: HIVE-19940
> URL: https://issues.apache.org/jira/browse/HIVE-19940
> Project: Hive
>  Issue Type: Improvement
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-19940.1.patch
>
>
> With RBO, predicates with any UDF don't get pushed down.  It makes sense not 
> to push down predicates with non-deterministic functions, as the meaning of 
> the query changes after the predicate is resolved to use the function.  But 
> pushing down a deterministic function is beneficial.
> Test Case:
> {code}
> set hive.cbo.enable=false;
> CREATE TABLE `testb`(
>`cola` string COMMENT '',
>`colb` string COMMENT '',
>`colc` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> CREATE TABLE `testa`(
>`col1` string COMMENT '',
>`col2` string COMMENT '',
>`col3` string COMMENT '',
>`col4` string COMMENT '',
>`col5` string COMMENT '')
> PARTITIONED BY (
>`part1` string,
>`part2` string,
>`part3` string)
> STORED AS AVRO;
> insert into testA partition (part1='US', part2='ABC', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='UK', part2='DEF', part3='123')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', '1001', '2001', '3001', 'abcd');
> insert into testA partition (part1='US', part2='DEF', part3='200')
> values ('12.34', '100', '200', '300', 'abc'),
> ('12.341', 

[jira] [Commented] (HIVE-19928) Load Data for managed tables should set the owner of loaded files to a configurable user

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517784#comment-16517784
 ] 

Hive QA commented on HIVE-19928:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928283/HIVE-19928.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11936/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11936/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11936/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-06-20 04:17:02.003
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-11936/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-06-20 04:17:02.007
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2394e40 HIVE-19870: HCatalog dynamic partition query can fail, 
if the table path is managed by Sentry (Peter Vary via Marta Kuczora)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 2394e40 HIVE-19870: HCatalog dynamic partition query can fail, 
if the table path is managed by Sentry (Peter Vary via Marta Kuczora)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-06-20 04:17:03.365
+ rm -rf ../yetus_PreCommit-HIVE-Build-11936
+ mkdir ../yetus_PreCommit-HIVE-Build-11936
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-11936
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11936/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/exec/MoveTask.java: does not 
exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java: does not 
exist in index
error: a/ql/src/test/queries/clientpositive/bucket_map_join_tez2.q: does not 
exist in index
Going to apply patch with: git apply -p1
+ [[ maven == \m\a\v\e\n ]]
+ rm -rf /data/hiveptest/working/maven/org/apache/hive
+ mvn -B clean install -DskipTests -T 4 -q 
-Dmaven.repo.local=/data/hiveptest/working/maven
protoc-jar: executing: [/tmp/protoc7664412832904980413.exe, --version]
libprotoc 2.5.0
protoc-jar: executing: [/tmp/protoc7664412832904980413.exe, 
-I/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore,
 
--java_out=/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources,
 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/protobuf/org/apache/hadoop/hive/metastore/metastore.proto]
ANTLR Parser Generator  Version 3.5.2
Output file 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/target/generated-sources/org/apache/hadoop/hive/metastore/parser/FilterParser.java
 does not exist: must build 
/data/hiveptest/working/apache-github-source-source/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/parser/Filter.g
org/apache/hadoop/hive/metastore/parser/Filter.g
log4j:WARN No appenders could be found for logger (DataNucleus.Persistence).
log4j:WARN Please initialize the log4j system properly.
DataNucleus Enhancer (version 4.1.17) for API "JDO"
DataNucleus Enhancer completed with success for 40 classes.
ANTLR Parser Generator  Version 3.5.2
Output file 

[jira] [Commented] (HIVE-19897) Add more tests for parallel compilation

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517781#comment-16517781
 ] 

Hive QA commented on HIVE-19897:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928261/HIVE-19897.3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14535 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestMiniDruidKafkaCliDriver.testCliDriver[druidkafkamini_basic]
 (batchId=257)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11935/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11935/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11935/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928261 - PreCommit-HIVE-Build

> Add more tests for parallel compilation 
> 
>
> Key: HIVE-19897
> URL: https://issues.apache.org/jira/browse/HIVE-19897
> Project: Hive
>  Issue Type: Test
>  Components: HiveServer2
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-19897.1.patch, HIVE-19897.3.patch
>
>
> The two parallel compilation tests in 
> org.apache.hive.jdbc.TestJdbcWithMiniHS2 do not really cover the case of 
> queries compiling concurrently from different connections. Not sure whether 
> that is on purpose or by mistake. Add more tests to cover the case.
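
As a rough sketch of the coverage being asked for, the following compiles statements 
concurrently from several independent JDBC connections. The connection URL, credentials, 
and query are placeholders; this is not the test added in the attached patches.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelCompileSketch {
  public static void main(String[] args) throws Exception {
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    final String url = "jdbc:hive2://localhost:10000/default"; // placeholder
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<Future<Boolean>> results = new ArrayList<>();
    for (int i = 0; i < 4; i++) {
      // Each task uses its own connection, so compilation really happens
      // concurrently across different HS2 sessions.
      results.add(pool.submit(new Callable<Boolean>() {
        @Override
        public Boolean call() throws Exception {
          try (Connection conn = DriverManager.getConnection(url, "hive", "")) {
            // EXPLAIN exercises the compiler without executing the query.
            return conn.createStatement().execute("EXPLAIN SELECT 1");
          }
        }
      }));
    }
    for (Future<Boolean> f : results) {
      f.get(); // surfaces any compilation failure from the worker threads
    }
    pool.shutdown();
  }
}
{code}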



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19016) Vectorization and Parquet: Disable vectorization for nested complex types

2018-06-19 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517769#comment-16517769
 ] 

Vihang Karajgaonkar commented on HIVE-19016:


Thanks [~mmccline] for the patch. I was wondering if it makes sense to make the 
patch more generic, so that any file format can expose the types that are not 
supported for vectorization. For instance, define a method in 
{{VectorizedInputFormatInterface}} that returns {{true}} or {{false}} for a given 
list of TypeInfos, depending on whether the types are supported.

+1 (pending tests)
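
A hypothetical sketch of the kind of hook being suggested here; the interface and method 
names below do not exist in Hive today and only illustrate the shape of such an API.

{code}
import java.util.List;

import org.apache.hadoop.hive.serde2.typeinfo.TypeInfo;

// Hypothetical addition -- VectorizedInputFormatInterface does not declare
// anything like this today.
public interface VectorizedTypeSupport {
  /**
   * @param columnTypes the column types the reader would need to produce
   * @return true if every type can be handled by this format's vectorized
   *         reader, false otherwise
   */
  boolean supportsVectorizedTypes(List<TypeInfo> columnTypes);
}
{code}

The Parquet input format could then return false when it sees nested complex types, and the 
Vectorizer could fall back to row mode for that table scan instead of failing at runtime.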

> Vectorization and Parquet: Disable vectorization for nested complex types
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Original title: Vectorization and Parquet: When vectorized, 
> parquet_nested_complex.q produces RuntimeException: Unsupported type used
>  
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19897) Add more tests for parallel compilation

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517748#comment-16517748
 ] 

Hive QA commented on HIVE-19897:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
40s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} itests/hive-unit: The patch generated 0 new + 46 
unchanged - 2 fixed = 46 total (was 48) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 13m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11935/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: itests/hive-unit U: itests/hive-unit |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11935/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Add more tests for parallel compilation 
> 
>
> Key: HIVE-19897
> URL: https://issues.apache.org/jira/browse/HIVE-19897
> Project: Hive
>  Issue Type: Test
>  Components: HiveServer2
>Reporter: Yongzhi Chen
>Assignee: Yongzhi Chen
>Priority: Minor
> Attachments: HIVE-19897.1.patch, HIVE-19897.3.patch
>
>
> The two parallel compilation tests in 
> org.apache.hive.jdbc.TestJdbcWithMiniHS2 do not really cover the case of 
> queries compiling concurrently from different connections. Not sure whether 
> that is on purpose or by mistake. Add more tests to cover the case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19889) Wrong results due to PPD of non deterministic functions with CBO

2018-06-19 Thread Naveen Gangam (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517747#comment-16517747
 ] 

Naveen Gangam commented on HIVE-19889:
--

Looking at the new query plans for the query, the fix makes sense to me. So +1 
for me pending the review of the failed test. Thanks [~janulatha] for providing 
a patch for this.

> Wrong results due to PPD of non deterministic functions with CBO
> 
>
> Key: HIVE-19889
> URL: https://issues.apache.org/jira/browse/HIVE-19889
> Project: Hive
>  Issue Type: Bug
>Reporter: Janaki Lahorani
>Assignee: Janaki Lahorani
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-19889.1.patch, HIVE-19889.2.patch
>
>
> The following query can give wrong results when CBO is on:
> {code}
> select * from (
> select part1,randum123
> from (SELECT *, cast(rand() as double) AS randum123 FROM testA where 
> part1='CA' and part2 = 'ABC') a
> where randum123 <= 0.5) s where s.randum123 > 0.25 limit 20;
> The plan of the query is as follows:
> STAGE PLANS:
>   Stage: Stage-1
> Map Reduce
>   Map Operator Tree:
>   TableScan
> alias: testa
> Statistics: Num rows: 2 Data size: 4580 Basic stats: COMPLETE 
> Column stats: NONE
> Filter Operator
>   predicate: ((rand() <= 0.5D) and (rand() > 0.25D)) (type: 
> boolean)
>   Statistics: Num rows: 1 Data size: 2290 Basic stats: COMPLETE 
> Column stats: NONE
>   Select Operator
> expressions: 'CA' (type: string), rand() (type: double)
> outputColumnNames: _col0, _col1
> Statistics: Num rows: 1 Data size: 2290 Basic stats: COMPLETE 
> Column stats: NONE
> Limit
>   Number of rows: 20
>   Statistics: Num rows: 1 Data size: 2290 Basic stats: 
> COMPLETE Column stats: NONE
>   File Output Operator
> compressed: false
> Statistics: Num rows: 1 Data size: 2290 Basic stats: 
> COMPLETE Column stats: NONE
> table:
> input format: 
> org.apache.hadoop.mapred.SequenceFileInputFormat
> output format: 
> org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat
> serde: 
> org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
>   Stage: Stage-0
> Fetch Operator
>   limit: 20
>   Processor Tree:
> ListSink
> {code}
> The relevant part in the plan is the filter:
> {code}
> Filter Operator
>   predicate: ((rand() <= 0.5D) and (rand() > 0.25D)) (type: 
> boolean)
> {code}
> The predicates randum123 <= 0.5 and s.randum123 > 0.25 were pushed down, and 
> randum123 was resolved to rand(). This is bad because it results in two 
> invocations of rand(), and the rand() UDF is non-deterministic. Both 
> rand() calls can generate values that satisfy the predicates independently, 
> but not together, whereas the original intention of the query is to return 
> results when rand() falls between 0.25 and 0.5.
> A sample result:
> {code}
> CA0.9191984370369802
> CA0.397933021566812
> {code}
> where the condition was not satisfied.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19936) explain on a query failing in secure cluster whereas query itself works

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517713#comment-16517713
 ] 

Hive QA commented on HIVE-19936:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
21s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11934/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11934/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> explain on a query failing in secure cluster whereas query itself works
> ---
>
> Key: HIVE-19936
> URL: https://issues.apache.org/jira/browse/HIVE-19936
> Project: Hive
>  Issue Type: Bug
>  Components: Hooks
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19936.1.patch
>
>
> On a secured cluster with Sentry integrated run the following queries
> {noformat}
> create table foobar (id int) partitioned by (val int);
> explain alter table foobar add partition (val=50);
> {noformat}
> The explain query will fail with the following exception, while the query 
> itself works with no issue:
> Error while compiling statement: FAILED: SemanticException No valid 
> privileges
>  Required privilege( Table) not available in output privileges
>  The required privileges: (state=42000,code=4)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19821) Distributed HiveServer2

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517685#comment-16517685
 ] 

Hive QA commented on HIVE-19821:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928256/HIVE-19821.2.WIP.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11933/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11933/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11933/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL 
https://issues.apache.org/jira/secure/attachment/12928256/HIVE-19821.2.WIP.patch
 was found in seen patch url's cache and a test was probably run already on it. 
Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928256 - PreCommit-HIVE-Build

> Distributed HiveServer2
> ---
>
> Key: HIVE-19821
> URL: https://issues.apache.org/jira/browse/HIVE-19821
> Project: Hive
>  Issue Type: New Feature
>  Components: HiveServer2
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19821.1.WIP.patch, HIVE-19821.2.WIP.patch, 
> HIVE-19821_ Distributed HiveServer2.pdf
>
>
> HS2 deployments often hit OOM issues due to a number of factors: (1) too many 
> concurrent connections, (2) queries that scan a large number of partitions have 
> to pull a lot of metadata into memory (e.g. a query reading thousands of 
> partitions requires loading thousands of partitions into memory), (3) very 
> large queries can take up a lot of heap space, especially during query 
> parsing. There are a number of other factors that cause HiveServer2 to run 
> out of memory; these are just some of the more common ones.
> Distributed HS2 proposes to do all query parsing, compilation, planning, and 
> execution coordination inside a dedicated container. This should 
> significantly decrease memory pressure on HS2 and allow HS2 to scale to a 
> larger number of concurrent users.
> For HoS (and I think Hive-on-Tez) this just requires moving all query 
> compilation, planning, etc. inside the application master for the 
> corresponding Hive session.
> The main benefit here is isolation. A poorly written Hive query cannot bring 
> down an entire HiveServer2 instance and force all other queries to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19937) Intern JobConf objects in Spark tasks

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517684#comment-16517684
 ] 

Hive QA commented on HIVE-19937:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928254/HIVE-19937.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14533 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11932/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11932/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11932/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928254 - PreCommit-HIVE-Build

> Intern JobConf objects in Spark tasks
> -
>
> Key: HIVE-19937
> URL: https://issues.apache.org/jira/browse/HIVE-19937
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19937.1.patch
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the 
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from 
> being thrown. However, setting this variable comes at the cost of storing a 
> duplicate {{JobConf}} object for each Spark task. These objects can take up a 
> significant amount of memory; we should intern them so that Spark tasks 
> running in the same JVM don't store duplicate copies.
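
One possible shape of the interning, sketched with Guava's Interner; the helper name and the 
choice of a weak interner are assumptions for illustration, not the contents of 
HIVE-19937.1.patch.

{code}
import java.util.Map;

import org.apache.hadoop.mapred.JobConf;

import com.google.common.collect.Interner;
import com.google.common.collect.Interners;

public class JobConfInternSketch {
  // One weak interner shared by all tasks in the executor JVM, so equal
  // property strings are backed by a single object.
  private static final Interner<String> STRINGS = Interners.newWeakInterner();

  static JobConf clonedAndInterned(JobConf original) {
    JobConf copy = new JobConf(original); // still clone per task, as before
    for (Map.Entry<String, String> e : copy) {
      // Re-set each property with interned key/value so the per-task copies
      // share identical strings instead of duplicating them.
      copy.set(STRINGS.intern(e.getKey()), STRINGS.intern(e.getValue()));
    }
    return copy;
  }
}
{code}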



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19943) Header values keep showing up in result sets

2018-06-19 Thread shaoxiaowei (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517680#comment-16517680
 ] 

shaoxiaowei commented on HIVE-19943:


I didn't find 2.1.0. Can you tell me where you downloaded it? If 2.1.0 is not 
an official release, please try not to use it.

> Header values keep showing up in result sets
> 
>
> Key: HIVE-19943
> URL: https://issues.apache.org/jira/browse/HIVE-19943
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
> Environment: Hdinsight Hive interactivequerry
> [Components|https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning#hadoop-components-available-with-different-hdinsight-versions]
>Reporter: Liam De Lee
>Assignee: shaoxiaowei
>Priority: Major
>
> We are using the tblproperties ("skip.header.line.count"="1") when creating 
> an external table.
> When we do a select * from table, we get it back as expected, without the 
> header present in the result set.
> However, when we do for instance a count(1), the header row is included in the 
> count (tested with a select * from table and pasted it into notepad to find the 
> number of rows).
> If we also do this with a select distinct(column) from table, we get the 
> header as a distinct value.
> file structure:
> ||_TESTING_TYPE||
> |adf|
> |hyg|
> |abc|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19943) Header values keep showing up in result sets

2018-06-19 Thread shaoxiaowei (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517676#comment-16517676
 ] 

shaoxiaowei commented on HIVE-19943:


I didn't find this bug, but my version is hive-1.1.0 on cdh5.11.2. I'll test 
with the 2.1.0 version later.

> Header values keep showing up in result sets
> 
>
> Key: HIVE-19943
> URL: https://issues.apache.org/jira/browse/HIVE-19943
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
> Environment: Hdinsight Hive interactivequerry
> [Components|https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning#hadoop-components-available-with-different-hdinsight-versions]
>Reporter: Liam De Lee
>Assignee: shaoxiaowei
>Priority: Major
>
> We are using the tblproperties ("skip.header.line.count"="1") when creating 
> an external table.
> When we do a select * from table, we get it back as expected, without the 
> header present in the result set.
> However, when we do for instance a count(1), the header row is included in the 
> count (tested with a select * from table and pasted it into notepad to find the 
> number of rows).
> If we also do this with a select distinct(column) from table, we get the 
> header as a distinct value.
> file structure:
> ||_TESTING_TYPE||
> |adf|
> |hyg|
> |abc|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work stopped] (HIVE-19943) Header values keep showing up in result sets

2018-06-19 Thread shaoxiaowei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19943 stopped by shaoxiaowei.
--
> Header values keep showing up in result sets
> 
>
> Key: HIVE-19943
> URL: https://issues.apache.org/jira/browse/HIVE-19943
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
> Environment: Hdinsight Hive interactivequerry
> [Components|https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning#hadoop-components-available-with-different-hdinsight-versions]
>Reporter: Liam De Lee
>Assignee: shaoxiaowei
>Priority: Major
>
> We are using the tblproperties ("skip.header.line.count"="1") when creating 
> an external table.
> When we do a select * from table, we get it back as expected, without the 
> header present in the result set.
> However, when we do for instance a count(1), the header row is included in the 
> count (tested with a select * from table and pasted it into notepad to find the 
> number of rows).
> If we also do this with a select distinct(column) from table, we get the 
> header as a distinct value.
> file structure:
> ||_TESTING_TYPE||
> |adf|
> |hyg|
> |abc|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-19943) Header values keep showing up in result sets

2018-06-19 Thread shaoxiaowei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19943 started by shaoxiaowei.
--
> Header values keep showing up in result sets
> 
>
> Key: HIVE-19943
> URL: https://issues.apache.org/jira/browse/HIVE-19943
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
> Environment: Hdinsight Hive interactivequerry
> [Components|https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning#hadoop-components-available-with-different-hdinsight-versions]
>Reporter: Liam De Lee
>Assignee: shaoxiaowei
>Priority: Major
>
> We are using the tblproperties ("skip.header.line.count"="1") when creating 
> an external table.
> When we do a select * from table, we get it back as expected, without the 
> header present in the result set.
> However, when we do for instance a count(1), the header row is included in the 
> count (tested with a select * from table and pasted it into notepad to find the 
> number of rows).
> If we also do this with a select distinct(column) from table, we get the 
> header as a distinct value.
> file structure:
> ||_TESTING_TYPE||
> |adf|
> |hyg|
> |abc|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19943) Header values keep showing up in result sets

2018-06-19 Thread shaoxiaowei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shaoxiaowei reassigned HIVE-19943:
--

Assignee: shaoxiaowei

> Header values keep showing up in result sets
> 
>
> Key: HIVE-19943
> URL: https://issues.apache.org/jira/browse/HIVE-19943
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.1.0
> Environment: Hdinsight Hive interactivequerry
> [Components|https://docs.microsoft.com/en-us/azure/hdinsight/hdinsight-component-versioning#hadoop-components-available-with-different-hdinsight-versions]
>Reporter: Liam De Lee
>Assignee: shaoxiaowei
>Priority: Major
>
> We are using the tblproperties ("skip.header.line.count"="1") when creating 
> an external table.
> When we do a select * from table, we get it back as expected, without the 
> header present in the result set.
> However, when we do for instance a count(1), the header row is included in the 
> count (tested with a select * from table and pasted it into notepad to find the 
> number of rows).
> If we also do this with a select distinct(column) from table, we get the 
> header as a distinct value.
> file structure:
> ||_TESTING_TYPE||
> |adf|
> |hyg|
> |abc|



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19937) Intern JobConf objects in Spark tasks

2018-06-19 Thread Xuefu Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517674#comment-16517674
 ] 

Xuefu Zhang commented on HIVE-19937:


+1

> Intern JobConf objects in Spark tasks
> -
>
> Key: HIVE-19937
> URL: https://issues.apache.org/jira/browse/HIVE-19937
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19937.1.patch
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the 
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from 
> being thrown. However, setting this variable comes at the cost of storing a 
> duplicate {{JobConf}} object for each Spark task. These objects can take up a 
> significant amount of memory; we should intern them so that Spark tasks 
> running in the same JVM don't store duplicate copies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19937) Intern JobConf objects in Spark tasks

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517671#comment-16517671
 ] 

Hive QA commented on HIVE-19937:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11932/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11932/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Intern JobConf objects in Spark tasks
> -
>
> Key: HIVE-19937
> URL: https://issues.apache.org/jira/browse/HIVE-19937
> Project: Hive
>  Issue Type: Improvement
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-19937.1.patch
>
>
> When fixing HIVE-16395, we decided that each new Spark task should clone the 
> {{JobConf}} object to prevent any {{ConcurrentModificationException}} from 
> being thrown. However, setting this variable comes at the cost of storing a 
> duplicate {{JobConf}} object for each Spark task. These objects can take up a 
> significant amount of memory; we should intern them so that Spark tasks 
> running in the same JVM don't store duplicate copies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19176) Add HoS support to progress bar on Beeline client

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517654#comment-16517654
 ] 

Hive QA commented on HIVE-19176:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928255/HIVE-19176.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 14533 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.ql.exec.spark.TestSparkTask.testRemoteSparkCancel 
(batchId=299)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11931/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11931/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11931/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928255 - PreCommit-HIVE-Build

> Add HoS support to progress bar on Beeline client
> -
>
> Key: HIVE-19176
> URL: https://issues.apache.org/jira/browse/HIVE-19176
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-19176.1.patch
>
>
> Make whats was done in HIVE-15473 work for HoS.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19902) Provide Metastore micro-benchmarks

2018-06-19 Thread Alexander Kolbasov (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517642#comment-16517642
 ] 

Alexander Kolbasov commented on HIVE-19902:
---

[~alangates] I was able to get around the issue of the child pom inheriting too 
much from the parent by dropping the  spec from the child. This fixed 
the child issues, but it turns out that standalone-metastore doesn't like when 
it is converted from 'jar' to 'pom' packaging - it starts complaining about 
{{ConfTemplatePrinter}} class not found.

I think it would work once HIVE-17751 is in place (which will refactor 
standalone-metastore into {common,server,client} parts), but as of now we can't 
add submodules to the standalone metastore.

I'd rather avoid a direct dependency between this jira and HIVE-17751: for now, 
add the metastore benchmarks directly under hive and move them later, once 
HIVE-17751 is fixed. What do you think? [~owen.omalley], [~vihangk1], [~pvary], 
do you have any opinion?

> Provide Metastore micro-benchmarks
> --
>
> Key: HIVE-19902
> URL: https://issues.apache.org/jira/browse/HIVE-19902
> Project: Hive
>  Issue Type: Improvement
>  Components: Standalone Metastore
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Alexander Kolbasov
>Assignee: Alexander Kolbasov
>Priority: Major
>
> It would be very useful to have metastore benchmarks to be able to track perf 
> issues.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-06-19 Thread Alexander Kolbasov (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517633#comment-16517633
 ] 

Alexander Kolbasov commented on HIVE-17751:
---

There is a new complication that was added recently.

HiveMetastoreClient now has this bit of code which is only used in embedded 
mode:

{code}
  MaterializationsInvalidationCache.get().init(conf, (IHMSHandler) client);
{code}

Neither {{MaterializationsInvalidationCache}} nor {{IHMSHandler}} belongs to the 
client code or the common code.

This was added as part of 
HIVE-18776: MaterializationsInvalidationCache loading causes race condition in 
the metastore (Jesus Camacho Rodriguez, reviewed by Alan Gates)

[~alangates] [~jcamachorodriguez] any thoughts on how to handle this case for 
the standalone client?

I am using reflection to initialize the standalone client, but in this case it 
seems rather tricky.
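Purely as a sketch of the reflection approach mentioned above - the class and method names come from the quoted snippet, while the package names and the exact {{init}} signature are assumptions:

{code}
import org.apache.hadoop.conf.Configuration;

public final class EmbeddedCacheInitializer {
  private EmbeddedCacheInitializer() {}

  /** Initializes the invalidation cache only when the server-side classes are present. */
  public static void maybeInit(Configuration conf, Object embeddedHandler) {
    try {
      Class<?> cacheClass =
          Class.forName("org.apache.hadoop.hive.metastore.MaterializationsInvalidationCache");
      Class<?> handlerIface =
          Class.forName("org.apache.hadoop.hive.metastore.IHMSHandler");
      Object cache = cacheClass.getMethod("get").invoke(null);
      cacheClass.getMethod("init", Configuration.class, handlerIface)
          .invoke(cache, conf, embeddedHandler);
    } catch (ClassNotFoundException e) {
      // Remote mode: the server classes are not on the client classpath, so skip.
    } catch (ReflectiveOperationException e) {
      throw new IllegalStateException("Could not initialize the invalidation cache", e);
    }
  }
}
{code}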


> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-17751.06-standalone-metastore.patch
>
>
> External applications that interface with HMS should ideally include only the 
> HMS client library instead of one big library that also contains the server. A 
> thin client library makes cross-version support for external applications 
> easier. We should sub-divide the standalone module into possibly 3 modules (one 
> for common classes, one for client classes, and one for server) or 2 
> sub-modules (one for client and one for server) so that we can generate 
> separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19176) Add HoS support to progress bar on Beeline client

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517632#comment-16517632
 ] 

Hive QA commented on HIVE-19176:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
29s{color} | {color:blue} jdbc in master has 17 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
3s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
39s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
58s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
39s{color} | {color:red} ql: The patch generated 1 new + 13 unchanged - 2 fixed 
= 14 total (was 15) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} service: The patch generated 3 new + 31 unchanged - 0 
fixed = 34 total (was 31) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
21s{color} | {color:red} ql generated 2 new + 2279 unchanged - 1 fixed = 2281 
total (was 2280) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
13s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 34m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Unread field:field be static?  At RenderStrategy.java:[line 47] |
|  |  
org.apache.hadoop.hive.ql.exec.spark.status.RenderStrategy$BaseUpdateFunction.isSameAsPreviousProgress(Map,
 Map) makes inefficient use of keySet iterator instead of entrySet iterator  At 
RenderStrategy.java:of keySet iterator instead of entrySet iterator  At 
RenderStrategy.java:[line 147] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11931/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11931/yetus/diff-checkstyle-ql.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11931/yetus/diff-checkstyle-service.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11931/yetus/new-findbugs-ql.html
 |
| asflicense | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11931/yetus/patch-asflicense-problems.txt
 |
| modules | C: common jdbc ql service U: . |
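For reference, not the RenderStrategy code itself: a minimal example of the entrySet pattern that the FindBugs warning above asks for, with the map types assumed for illustration.

{code}
import java.util.Map;
import java.util.Objects;

final class ProgressComparison {
  private ProgressComparison() {}

  /** Compares two progress maps entry by entry, iterating entrySet() to avoid a per-key lookup. */
  static boolean isSameAsPreviousProgress(Map<String, Long> current, Map<String, Long> previous) {
    if (current.size() != previous.size()) {
      return false;
    }
    for (Map.Entry<String, Long> e : current.entrySet()) {
      if (!Objects.equals(e.getValue(), previous.get(e.getKey()))) {
        return false;
      }
    }
    return true;
  }
}
{code}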

[jira] [Assigned] (HIVE-19948) HiveCli is not splitting the command by semicolon properly if quotes are inside the string

2018-06-19 Thread Aihua Xu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aihua Xu reassigned HIVE-19948:
---

Assignee: Aihua Xu

> HiveCli is not splitting the command by semicolon properly if quotes are 
> inside the string 
> ---
>
> Key: HIVE-19948
> URL: https://issues.apache.org/jira/browse/HIVE-19948
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 2.2.0
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
>
> HIVE-15297 tries to split the command while accounting for semicolons inside 
> strings, but it doesn't consider the case where quotes can also appear inside a string. 
> For the following command {{insert into escape1 partition (ds='1', part='3') 
> values ("abc' ");}}, it will fail with 
> {noformat}
> 18/06/19 16:37:05 ERROR ql.Driver: FAILED: ParseException line 1:64 
> extraneous input ';' expecting EOF near ''
> org.apache.hadoop.hive.ql.parse.ParseException: line 1:64 extraneous input 
> ';' expecting EOF near ''
>   at 
> org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:220)
>   at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:74)
>   at org.apache.hadoop.hive.ql.parse.ParseUtils.parse(ParseUtils.java:67)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:606)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1686)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1633)
>   at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1628)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126)
>   at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:214)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:821)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {noformat}
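For reference, a minimal quote-aware splitter along these lines (a sketch only, not the actual CliDriver code; the class and method names are made up, and escaped quotes are not handled):

{code}
import java.util.ArrayList;
import java.util.List;

public final class CommandSplitter {
  private CommandSplitter() {}

  /** Splits a line on semicolons, ignoring semicolons inside '...' or "..." literals. */
  public static List<String> splitBySemicolon(String line) {
    List<String> commands = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    char quote = 0;                         // 0 means we are outside any quoted literal
    for (int i = 0; i < line.length(); i++) {
      char c = line.charAt(i);
      if (quote != 0) {
        if (c == quote) {
          quote = 0;                        // closing quote of the current literal
        }
      } else if (c == '\'' || c == '"') {
        quote = c;                          // opening quote starts a literal
      } else if (c == ';') {
        commands.add(current.toString());   // statement boundary: drop the separator
        current.setLength(0);
        continue;
      }
      current.append(c);
    }
    if (current.length() > 0) {
      commands.add(current.toString());
    }
    return commands;
  }
}
{code}

With this sketch, {{insert into escape1 partition (ds='1', part='3') values ("abc' ");}} is returned as a single statement, because the single quote inside the double-quoted literal no longer toggles the quoting state.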



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19938) Upgrade scripts for information schema

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517617#comment-16517617
 ] 

Hive QA commented on HIVE-19938:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928257/HIVE-19938.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14514 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=257)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=257)
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=257)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11930/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11930/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11930/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 3 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928257 - PreCommit-HIVE-Build

> Upgrade scripts for information schema
> --
>
> Key: HIVE-19938
> URL: https://issues.apache.org/jira/browse/HIVE-19938
> Project: Hive
>  Issue Type: Bug
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19938.1.patch
>
>
> To make schematool -upgradeSchema work for information schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19942) Hive Notification: All events for indexes should have table name

2018-06-19 Thread Vihang Karajgaonkar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517615#comment-16517615
 ] 

Vihang Karajgaonkar commented on HIVE-19942:


Thanks [~bharos92] for the patch. The change itself looks good to me. Can you 
please modify the tests in {{TestDbNotificationListener}} to add an assertion that 
tbl_name is not null? 
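A hypothetical helper for such an assertion - the {{getTableName()}} getter comes from the Thrift-generated {{NotificationEvent}}, while the helper class and the expected-table plumbing are made up for illustration:

{code}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import java.util.List;
import org.apache.hadoop.hive.metastore.api.NotificationEvent;

public final class IndexEventAssertions {
  private IndexEventAssertions() {}

  /** Asserts that the latest index-related event carries the name of the indexed table. */
  public static void assertIndexEventHasTable(List<NotificationEvent> events, String expectedTable) {
    NotificationEvent latest = events.get(events.size() - 1);
    assertNotNull("tbl_name must not be null for index events", latest.getTableName());
    assertEquals("index event should reference the indexed table",
        expectedTable, latest.getTableName());
  }
}
{code}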

> Hive Notification: All events for indexes should have table name
> 
>
> Key: HIVE-19942
> URL: https://issues.apache.org/jira/browse/HIVE-19942
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive
>Affects Versions: 2.3.2
>Reporter: Bharathkrishna Guruvayoor Murali
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-19942.1.patch
>
>
> All the index events (Create Index, Alter Index, Drop Index) currently have 
> TBL_NAME set to null. TBL_NAME should be populated with the table on which 
> the index is created.
> This makes it easier to decide whether to process the event or not without 
> needing to parse the json message (which is a slower process).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19946) VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different JVMs

2018-06-19 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19946:
--
Affects Version/s: 3.0.0

> VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different 
> JVMs
> --
>
> Key: HIVE-19946
> URL: https://issues.apache.org/jira/browse/HIVE-19946
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19946.1.patch
>
>
> VectorizedRowBatchCtx.recordIdColumnVector was used temporarily to pass 
> the record id column, which is virtual, between a reducer and a mapper. However, 
> when the reducer and the mapper are not in the same JVM, it produces incorrect 
> results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19946) VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different JVMs

2018-06-19 Thread Eugene Koifman (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eugene Koifman updated HIVE-19946:
--
Component/s: Transactions

> VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different 
> JVMs
> --
>
> Key: HIVE-19946
> URL: https://issues.apache.org/jira/browse/HIVE-19946
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 3.0.0
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19946.1.patch
>
>
> VectorizedRowBatchCtx.recordIdColumnVector was used temporarily to pass 
> the record id column, which is virtual, between a reducer and a mapper. However, 
> when the reducer and the mapper are not in the same JVM, it produces incorrect 
> results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19922) TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky

2018-06-19 Thread Ashutosh Chauhan (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517593#comment-16517593
 ] 

Ashutosh Chauhan commented on HIVE-19922:
-

Hi [~pvary], I looked at the last 10 runs and haven't found this test to be flaky. 
Can you point me to the logs where it failed? I'd like to see how we can 
increase its robustness (instead of disabling it). cc: [~nishantbangarwa]

> TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky
> --
>
> Key: HIVE-19922
> URL: https://issues.apache.org/jira/browse/HIVE-19922
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-19922.2.patch, HIVE-19922.3.patch, HIVE-19922.patch
>
>
> Consistently failing in the last 4 runs.
> See:
> [https://builds.apache.org/job/PreCommit-HIVE-Build/11824/testReport/org.apache.hadoop.hive.cli/TestMiniDruidKafkaCliDriver/testCliDriver_druidkafkamini_basic_/history/]
> Cannot reproduce the failure locally :(
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19946) VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different JVMs

2018-06-19 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517594#comment-16517594
 ] 

Sergey Shelukhin commented on HIVE-19946:
-

+1 pending tests.
cc [~mmccline] this is related to VectorMapOperator

> VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different 
> JVMs
> --
>
> Key: HIVE-19946
> URL: https://issues.apache.org/jira/browse/HIVE-19946
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19946.1.patch
>
>
> VectorizedRowBatchCtx.recordIdColumnVector was used temporarily to pass 
> the record id column, which is virtual, between a reducer and a mapper. However, 
> when the reducer and the mapper are not in the same JVM, it produces incorrect 
> results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19103) Nested structure Projection Push Down in Hive with ORC

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517573#comment-16517573
 ] 

Hive QA commented on HIVE-19103:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928243/HIVE-19103.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11929/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11929/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11929/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-06-19 21:57:36.505
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-11929/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-06-19 21:57:36.509
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 2394e40 HIVE-19870: HCatalog dynamic partition query can fail, 
if the table path is managed by Sentry (Peter Vary via Marta Kuczora)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 2394e40 HIVE-19870: HCatalog dynamic partition query can fail, 
if the table path is managed by Sentry (Peter Vary via Marta Kuczora)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-06-19 21:57:37.511
+ rm -rf ../yetus_PreCommit-HIVE-Build-11929
+ mkdir ../yetus_PreCommit-HIVE-Build-11929
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-11929
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-11929/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: a/pom.xml: does not exist in index
error: a/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java: does 
not exist in index
error: 
a/ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java: does 
not exist in index
error: patch failed: pom.xml:194
Falling back to three-way merge...
Applied patch to 'pom.xml' with conflicts.
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java:34
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcInputFormat.java: patch 
does not apply
error: patch failed: 
ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java:1706
error: repository lacks the necessary blob to fall back on 3-way merge.
error: ql/src/test/org/apache/hadoop/hive/ql/io/orc/TestInputOutputFormat.java: 
patch does not apply
fatal: git diff header lacks filename information when removing 2 leading 
pathname components (line 5)
The patch does not appear to apply with p0, p1, or p2
+ result=1
+ '[' 1 -ne 0 ']'
+ rm -rf yetus_PreCommit-HIVE-Build-11929
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928243 - PreCommit-HIVE-Build

> Nested structure Projection Push Down in Hive with ORC
> --
>
> Key: HIVE-19103
> URL: https://issues.apache.org/jira/browse/HIVE-19103
> Project: Hive
>  Issue Type: Improvement
>  Components: Hive, ORC
>Reporter: Ashish Sharma
>Assignee: Ashish Sharma
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HIVE-19103.2.patch, HIVE-19103.3.patch, HIVE-19103.patch
>
>
> Reading required columns only in nested structure schema
> Example - 
> *Current state* - 
> Schema  -  struct,g:string>>
> Query - select c.e.f from t where c.e.f 

[jira] [Commented] (HIVE-19899) Support stored as JsonFile

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517572#comment-16517572
 ] 

Hive QA commented on HIVE-19899:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928240/HIVE-19899.4.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 14507 tests 
executed
*Failed tests:*
{noformat}
TestReplicationOnHDFSEncryptedZones - did not produce a TEST-*.xml file (likely 
timed out) (batchId=234)
TestReplicationScenarios - did not produce a TEST-*.xml file (likely timed out) 
(batchId=234)
TestReplicationScenariosAcrossInstances - did not produce a TEST-*.xml file 
(likely timed out) (batchId=234)
org.apache.hive.hcatalog.pig.TestHCatLoaderComplexSchema.testMapNullKey[6] 
(batchId=199)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11928/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11928/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11928/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928240 - PreCommit-HIVE-Build

> Support stored as JsonFile 
> ---
>
> Key: HIVE-19899
> URL: https://issues.apache.org/jira/browse/HIVE-19899
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.0.0
> Environment: This is to add "stored as jsonfile" support for json 
> file format. 
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19899.1.patch, HIVE-19899.2.patch, 
> HIVE-19899.3.patch, HIVE-19899.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19899) Support stored as JsonFile

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517555#comment-16517555
 ] 

Hive QA commented on HIVE-19899:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
48s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
47s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} hcatalog/hcatalog-pig-adapter in master has 2 extant 
Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
1s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} ql: The patch generated 1 new + 3 unchanged - 0 fixed 
= 4 total (was 3) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 12s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11928/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11928/yetus/diff-checkstyle-ql.txt
 |
| modules | C: hcatalog/hcatalog-pig-adapter ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11928/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Support stored as JsonFile 
> ---
>
> Key: HIVE-19899
> URL: https://issues.apache.org/jira/browse/HIVE-19899
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Affects Versions: 3.0.0
> Environment: This is to add "stored as jsonfile" support for json 
> file format. 
>Reporter: Aihua Xu
>Assignee: Aihua Xu
>Priority: Major
> Attachments: HIVE-19899.1.patch, HIVE-19899.2.patch, 
> HIVE-19899.3.patch, HIVE-19899.4.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17751) Separate HMS Client and HMS server into separate sub-modules

2018-06-19 Thread Alexander Kolbasov (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517547#comment-16517547
 ] 

Alexander Kolbasov commented on HIVE-17751:
---

Coming back to work on this. I am refactoring standalone-metastore to separate 
the common, server, and client parts.

> Separate HMS Client and HMS server into separate sub-modules
> 
>
> Key: HIVE-17751
> URL: https://issues.apache.org/jira/browse/HIVE-17751
> Project: Hive
>  Issue Type: Sub-task
>  Components: Standalone Metastore
>Reporter: Vihang Karajgaonkar
>Assignee: Alexander Kolbasov
>Priority: Major
> Attachments: HIVE-17751.06-standalone-metastore.patch
>
>
> External applications that interface with HMS should ideally include only the 
> HMS client library instead of one big library that also contains the server. A 
> thin client library makes cross-version support for external applications 
> easier. We should sub-divide the standalone module into possibly 3 modules (one 
> for common classes, one for client classes, and one for server) or 2 
> sub-modules (one for client and one for server) so that we can generate 
> separate jars for the HMS client and server.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18916) SparkClientImpl doesn't error out if spark-submit fails

2018-06-19 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18916:

Attachment: HIVE-18916.5.patch

> SparkClientImpl doesn't error out if spark-submit fails
> ---
>
> Key: HIVE-18916
> URL: https://issues.apache.org/jira/browse/HIVE-18916
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18916.1.WIP.patch, HIVE-18916.2.patch, 
> HIVE-18916.3.patch, HIVE-18916.4.patch, HIVE-18916.5.patch
>
>
> If {{spark-submit}} returns a non-zero exit code, {{SparkClientImpl}} will 
> simply log the exit code, but won't throw an error. Eventually, the 
> connection timeout will get triggered and an exception like {{Timed out 
> waiting for client connection}} will be logged, which is pretty misleading.
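Conceptually (not the attached patch), failing fast could look like this sketch, assuming the client keeps a handle on the spark-submit child {{Process}}; the class and method names here are made up:

{code}
import java.io.IOException;

final class SparkSubmitMonitor {
  private SparkSubmitMonitor() {}

  /** Waits for the spark-submit child process and fails fast on a non-zero exit code. */
  static void checkExitCode(Process sparkSubmit) throws IOException, InterruptedException {
    int exitCode = sparkSubmit.waitFor();
    if (exitCode != 0) {
      throw new IOException("spark-submit exited with code " + exitCode
          + "; failing the query instead of waiting for the client connection timeout");
    }
  }
}
{code}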



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517535#comment-16517535
 ] 

Hive QA commented on HIVE-19532:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
36s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} storage-api in master has 48 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
7s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
18s{color} | {color:blue} standalone-metastore in master has 227 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  8m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
21s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} storage-api: The patch generated 1 new + 3 unchanged - 
0 fixed = 4 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
37s{color} | {color:red} root: The patch generated 75 new + 2746 unchanged - 32 
fixed = 2821 total (was 2778) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} itests/hcatalog-unit: The patch generated 1 new + 28 
unchanged - 0 fixed = 29 total (was 28) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
49s{color} | {color:red} ql: The patch generated 13 new + 989 unchanged - 15 
fixed = 1002 total (was 1004) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} standalone-metastore: The patch generated 60 new + 
1726 unchanged - 17 fixed = 1786 total (was 1743) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 108 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
16s{color} | {color:red} ql generated 1 new + 2280 unchanged - 0 fixed = 2281 
total (was 2280) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  3m 
19s{color} | {color:red} standalone-metastore generated 6 new + 226 unchanged - 
1 fixed = 232 total (was 227) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 10m  
8s{color} | {color:red} root generated 2 new + 369 unchanged - 0 fixed = 371 
total (was 369) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  1m 
23s{color} | {color:red} standalone-metastore generated 2 new + 54 unchanged - 
0 fixed = 56 total (was 54) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Nullcheck of tableSnapshot at line 4432 of value previously dereferenced 
in 

[jira] [Commented] (HIVE-19532) fix tests for master-txnstats branch

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517516#comment-16517516
 ] 

Hive QA commented on HIVE-19532:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928235/HIVE-19532.04.patch

{color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 339 failed/errored test(s), 14539 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] 
(batchId=69)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=55)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnStatsUpdateForStatsOptimizer_2]
 (batchId=31)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[metadata_only_queries] 
(batchId=32)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_all] (batchId=70)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_default] (batchId=86)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=81)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_nonpart] 
(batchId=13)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_part2] (batchId=21)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[stats_sizebug] 
(batchId=84)
org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] 
(batchId=258)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning]
 (batchId=150)
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] 
(batchId=152)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_no_buckets]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original]
 (batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[columnStatsUpdateForStatsOptimizer_1]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid]
 (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_into_default_keyword]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid2] 
(batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] 
(batchId=175)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_3]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_rebuild_dummy]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[metadata_only_queries]
 (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_conversions]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mm_exim] 
(batchId=178)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation]
 (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_transactional]
 (batchId=158)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[stats_date] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_remove_26]
 (batchId=167)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez]
 (batchId=106)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[metadata_only_queries]
 (batchId=122)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.testCliDriver[groupby2_map_skew_multi_distinct]
 (batchId=260)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.testCliDriver[groupby2_multi_distinct]
 (batchId=260)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.testCliDriver[groupby3_map_skew_multi_distinct]
 (batchId=260)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.testCliDriver[groupby3_multi_distinct]
 (batchId=260)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.testCliDriver[spark_job_max_tasks]
 (batchId=260)
org.apache.hadoop.hive.cli.TestSparkNegativeCliDriver.testCliDriver[spark_stage_max_tasks]
 (batchId=260)

[jira] [Comment Edited] (HIVE-19929) Vectorization: Recheck for vectorization wrong results/execution failures

2018-06-19 Thread Matt McCline (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517427#comment-16517427
 ] 

Matt McCline edited comment on HIVE-19929 at 6/19/18 7:32 PM:
--

TestCliDriver::type_change_test_int --> might need- - SORT_QUERY_RESULTS ?

TestCliDriver::delete_orig_table --> -HIVE-19109  ... (but it already 
committed?)-

TestCliDriver::join0 and parallel_join0 --> EXPLAIN plan difference (OK)

TestCliDriver::vector_left_outer_join2 --> EXPLAIN plan difference (OK)

TestCliDriver::vectorization_numeric_overflows --> EXPLAIN plan difference (OK)

TestCliDriver::vectorized_timestamp --> EXPLAIN plan difference (OK)

TestCliDriver::dynpart_sort_opt_bucketing might need -- SORT_QUERY_RESULTS ?

TestJdbcDriver2.testResultSetMetaData ???

TestStreaming.testStreamBucketingMatchesRegularBucketing ???

TestStreamingDynamicPartitioning.testDPStreamBucketingMatchesRegularBucketing 
???


was (Author: mmccline):
TestCliDriver::type_change_test_int --> might need -- SORT_QUERY_RESULTS ?

TestCliDriver::delete_orig_table --> HIVE-19109

TestCliDriver::join0 and parallel_join0 --> EXPLAIN plan difference (OK)

TestCliDriver::vector_left_outer_join2 --> EXPLAIN plan difference (OK)

TestCliDriver::vectorization_numeric_overflows --> EXPLAIN plan difference (OK)

TestCliDriver::vectorized_timestamp --> EXPLAIN plan difference (OK)

TestCliDriver::dynpart_sort_opt_bucketing might need -- SORT_QUERY_RESULTS ?

TestJdbcDriver2.testResultSetMetaData ???

TestStreaming.testStreamBucketingMatchesRegularBucketing ???

TestStreamingDynamicPartitioning.testDPStreamBucketingMatchesRegularBucketing 
???

> Vectorization: Recheck for vectorization wrong results/execution failures
> -
>
> Key: HIVE-19929
> URL: https://issues.apache.org/jira/browse/HIVE-19929
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19929.01.patch
>
>
> Use test variables hive.test.vectorized.execution.enabled.override=enable and 
> hive.test.vectorization.suppress.explain.execution.mode=true to look for 
> wrong results/execution failures when vectorization is forced ON and 
> "Execution mode: vectorized" is suppressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19920) Schematool fails in embedded mode when auth is on

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517444#comment-16517444
 ] 

Hive QA commented on HIVE-19920:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928234/HIVE-19920.3.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14533 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11926/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11926/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11926/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928234 - PreCommit-HIVE-Build

> Schematool fails in embedded mode when auth is on
> -
>
> Key: HIVE-19920
> URL: https://issues.apache.org/jira/browse/HIVE-19920
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19920.1.patch, HIVE-19920.2.patch, 
> HIVE-19920.3.patch
>
>
> This is a follow up of HIVE-19775. We need to override more properties in 
> embedded hs2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19888) Misleading "METASTORE_FILTER_HOOK will be ignored" warning from SessionState

2018-06-19 Thread Marcelo Vanzin (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517443#comment-16517443
 ] 

Marcelo Vanzin commented on HIVE-19888:
---

Is there anything I should look at here? I can't even find the logs for the 
test failure, and I also don't see how any test would be affected by my 
change...

> Misleading "METASTORE_FILTER_HOOK will be ignored" warning from SessionState
> 
>
> Key: HIVE-19888
> URL: https://issues.apache.org/jira/browse/HIVE-19888
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Affects Versions: 3.0.0
>Reporter: Marcelo Vanzin
>Assignee: Marcelo Vanzin
>Priority: Minor
> Attachments: HIVE-19888.1.patch
>
>
> When I run things on my test cluster I see things like this in my logs:
> {noformat}
> 18/03/14 13:35:20 WARN session.SessionState: METASTORE_FILTER_HOOK will be 
> ignored, since hive.security.authorization.manager is set to instance of 
> HiveAuthorizerFactory.
> 18/03/14 13:35:21 WARN session.SessionState: METASTORE_FILTER_HOOK will be 
> ignored, since hive.security.authorization.manager is set to instance of 
> HiveAuthorizerFactory.
> {noformat}
> That's because the code in SessionState.java is wrong:
> {code}
> String metastoreHook = 
> sessionConf.get(ConfVars.METASTORE_FILTER_HOOK.name());
> if 
> (!ConfVars.METASTORE_FILTER_HOOK.getDefaultValue().equals(metastoreHook) &&
> 
> !AuthorizationMetaStoreFilterHook.class.getName().equals(metastoreHook)) {
>   LOG.warn(ConfVars.METASTORE_FILTER_HOOK.name() +
>   " will be ignored, since hive.security.authorization.manager" +
>   " is set to instance of HiveAuthorizerFactory.");
> }
> {code}
> It's using {{.name()}} which is the enum name, not the actual config key.
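One way the check could be corrected - a sketch that mirrors the quoted fragment, only changing the lookup key, and assuming {{ConfVars.varname}} holds the actual config key as it does elsewhere in {{HiveConf}}:

{code}
// Compare against the config key (varname), not the enum constant name.
String metastoreHook = sessionConf.get(ConfVars.METASTORE_FILTER_HOOK.varname);
if (!ConfVars.METASTORE_FILTER_HOOK.getDefaultValue().equals(metastoreHook) &&
    !AuthorizationMetaStoreFilterHook.class.getName().equals(metastoreHook)) {
  LOG.warn(ConfVars.METASTORE_FILTER_HOOK.varname +
      " will be ignored, since hive.security.authorization.manager" +
      " is set to instance of HiveAuthorizerFactory.");
}
{code}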



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19929) Vectorization: Recheck for vectorization wrong results/execution failures

2018-06-19 Thread Matt McCline (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517427#comment-16517427
 ] 

Matt McCline commented on HIVE-19929:
-

TestCliDriver::type_change_test_int --> might need -- SORT_QUERY_RESULTS ?

TestCliDriver::delete_orig_table --> HIVE-19109

TestCliDriver::join0 and parallel_join0 --> EXPLAIN plan difference (OK)

TestCliDriver::vector_left_outer_join2 --> EXPLAIN plan difference (OK)

TestCliDriver::vectorization_numeric_overflows --> EXPLAIN plan difference (OK)

TestCliDriver::vectorized_timestamp --> EXPLAIN plan difference (OK)

TestCliDriver::dynpart_sort_opt_bucketing might need -- SORT_QUERY_RESULTS ?

TestJdbcDriver2.testResultSetMetaData ???

TestStreaming.testStreamBucketingMatchesRegularBucketing ???

TestStreamingDynamicPartitioning.testDPStreamBucketingMatchesRegularBucketing 
???

> Vectorization: Recheck for vectorization wrong results/execution failures
> -
>
> Key: HIVE-19929
> URL: https://issues.apache.org/jira/browse/HIVE-19929
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19929.01.patch
>
>
> Use test variables hive.test.vectorized.execution.enabled.override=enable and 
> hive.test.vectorization.suppress.explain.execution.mode=true to look for 
> wrong results/execution failures when vectorization is forced ON and 
> "Execution mode: vectorized" is suppressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19920) Schematool fails in embedded mode when auth is on

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517411#comment-16517411
 ] 

Hive QA commented on HIVE-19920:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
18s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  3m 
10s{color} | {color:blue} standalone-metastore in master has 227 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} standalone-metastore: The patch generated 1 new + 32 
unchanged - 0 fixed = 33 total (was 32) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11926/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11926/yetus/diff-checkstyle-standalone-metastore.txt
 |
| modules | C: standalone-metastore U: standalone-metastore |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11926/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Schematool fails in embedded mode when auth is on
> -
>
> Key: HIVE-19920
> URL: https://issues.apache.org/jira/browse/HIVE-19920
> Project: Hive
>  Issue Type: Bug
>  Components: Standalone Metastore
>Reporter: Daniel Dai
>Assignee: Daniel Dai
>Priority: Major
> Attachments: HIVE-19920.1.patch, HIVE-19920.2.patch, 
> HIVE-19920.3.patch
>
>
> This is a follow up of HIVE-19775. We need to override more properties in 
> embedded hs2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19016) Vectorization and Parquet: Disable vectorization for nested complex types

2018-06-19 Thread Matt McCline (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517409#comment-16517409
 ] 

Matt McCline commented on HIVE-19016:
-

[~vihangk1] can you give this a quick review (tests pending)?  Thanks

> Vectorization and Parquet: Disable vectorization for nested complex types
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Original title: Vectorization and Parquet: When vectorized, 
> parquet_nested_complex.q produces RuntimeException: Unsupported type used
>  
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19016) Vectorization and Parquet: Disable vectorization for nested complex types

2018-06-19 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19016:

Summary: Vectorization and Parquet: Disable vectorization for nested 
complex types  (was: Vectorization and Parquet: When vectorized, 
parquet_nested_complex.q produces RuntimeException: Unsupported type used)

> Vectorization and Parquet: Disable vectorization for nested complex types
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Original title: Vectorization and Parquet: When vectorized, 
> parquet_nested_complex.q produces RuntimeException: Unsupported type used
>  
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19016) Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces RuntimeException: Unsupported type used

2018-06-19 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19016:

Description: 
Original title: Vectorization and Parquet: When vectorized, 
parquet_nested_complex.q produces RuntimeException: Unsupported type used

 

Adding "SET hive.vectorized.execution.enabled=true;" to 
parquet_nested_complex.q triggers this call stack:
{noformat}
Caused by: java.lang.RuntimeException: Unsupported type used in 
list:array>
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
{noformat}
FYI: [~vihangk1]

  was:
Adding "SET hive.vectorized.execution.enabled=true;" to 
parquet_nested_complex.q triggers this call stack:

{noformat}
Caused by: java.lang.RuntimeException: Unsupported type used in 
list:array>
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
at 
org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
 ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
{noformat}

FYI: [~vihangk1]


> Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces 
> RuntimeException: Unsupported type used
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Original title: Vectorization and Parquet: When vectorized, 
> parquet_nested_complex.q produces RuntimeException: Unsupported type used
>  
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  

[jira] [Commented] (HIVE-19929) Vectorization: Recheck for vectorization wrong results/execution failures

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517391#comment-16517391
 ] 

Hive QA commented on HIVE-19929:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928232/HIVE-19929.01.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 14514 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=257)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=257)
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=257)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[delete_orig_table] 
(batchId=41)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[dynpart_sort_opt_bucketing]
 (batchId=89)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[join0] (batchId=61)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parallel_join0] 
(batchId=77)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[type_change_test_int] 
(batchId=16)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_left_outer_join2] 
(batchId=67)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorization_numeric_overflows]
 (batchId=71)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_timestamp] 
(batchId=80)
org.apache.hive.jdbc.TestJdbcDriver2.testResultSetMetaData (batchId=244)
org.apache.hive.streaming.TestStreaming.testStreamBucketingMatchesRegularBucketing
 (batchId=313)
org.apache.hive.streaming.TestStreamingDynamicPartitioning.testDPStreamBucketingMatchesRegularBucketing
 (batchId=313)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11925/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11925/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11925/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 14 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928232 - PreCommit-HIVE-Build

> Vectorization: Recheck for vectorization wrong results/execution failures
> -
>
> Key: HIVE-19929
> URL: https://issues.apache.org/jira/browse/HIVE-19929
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19929.01.patch
>
>
> Use test variables hive.test.vectorized.execution.enabled.override=enable and 
> hive.test.vectorization.suppress.explain.execution.mode=true to look for 
> wrong results/execution failures when vectorization is forced ON and 
> "Execution mode: vectorized" is suppressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19947) Load Data Rewrite : Explore better way to create temp table object to maintain consistency.

2018-06-19 Thread Deepak Jaiswal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Deepak Jaiswal reassigned HIVE-19947:
-


> Load Data Rewrite : Explore better way to create temp table object to 
> maintain consistency.
> ---
>
> Key: HIVE-19947
> URL: https://issues.apache.org/jira/browse/HIVE-19947
> Project: Hive
>  Issue Type: Task
>Reporter: Deepak Jaiswal
>Assignee: Deepak Jaiswal
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19403) Demote 'Pattern' Logging

2018-06-19 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517347#comment-16517347
 ] 

Aihua Xu commented on HIVE-19403:
-

Debug level is enough for such logs. +1.

> Demote 'Pattern' Logging
> 
>
> Key: HIVE-19403
> URL: https://issues.apache.org/jira/browse/HIVE-19403
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Assignee: gonglinglei
>Priority: Trivial
>  Labels: noob
> Attachments: HIVE-19403.1.patch
>
>
> In the {{DDLTask}} class, there is some logging that is not helpful to a 
> cluster admin and should be demoted to _debug_ level logging.  In fact, in 
> one place in the code, it already is.
> {code}
> LOG.info("pattern: {}", showDatabasesDesc.getPattern());
> LOG.debug("pattern: {}", pattern);
> LOG.info("pattern: {}", showFuncs.getPattern());
> LOG.info("pattern: {}", showTblStatus.getPattern());
> {code}
> Here is an example... as an admin, I can already see what the pattern is, I 
> do not need this extra logging.  It provides no additional context.
> {code:java|title=Example}
> 2018-05-03 03:08:26,354 INFO  org.apache.hadoop.hive.ql.Driver: 
> [HiveServer2-Background-Pool: Thread-101980]: Executing 
> command(queryId=hive_20180503030808_e53c26ef-2280-4eca-929b-668503105e2e): 
> SHOW TABLE EXTENDED FROM my_db LIKE '*'
> 2018-05-03 03:08:26,355 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: pattern: *
> {code}
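
For illustration, a minimal self-contained sketch of the demotion the issue asks for, using the same SLF4J parameterized logging style. The class below is a hypothetical stand-in, not the DDLTask change in the attached patch.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only (not HIVE-19403.1.patch): the same pattern logging, demoted from
// info to debug so it only shows up when debug logging is enabled for the logger.
public class PatternLoggingExample {
    private static final Logger LOG = LoggerFactory.getLogger(PatternLoggingExample.class);

    static void logPattern(String pattern) {
        // Previously logged at INFO; DEBUG keeps it out of normal admin-facing logs.
        LOG.debug("pattern: {}", pattern);
    }

    public static void main(String[] args) {
        logPattern("*");
    }
}
{code}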



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19929) Vectorization: Recheck for vectorization wrong results/execution failures

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517321#comment-16517321
 ] 

Hive QA commented on HIVE-19929:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 52s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11925/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11925/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Vectorization: Recheck for vectorization wrong results/execution failures
> -
>
> Key: HIVE-19929
> URL: https://issues.apache.org/jira/browse/HIVE-19929
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19929.01.patch
>
>
> Use test variables hive.test.vectorized.execution.enabled.override=enable and 
> hive.test.vectorization.suppress.explain.execution.mode=true to look for 
> wrong results/execution failures when vectorization is forced ON and 
> "Execution mode: vectorized" is suppressed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19886) Logs may be directed to 2 files if --hiveconf hive.log.file is used

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517306#comment-16517306
 ] 

Hive QA commented on HIVE-19886:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928231/HIVE-19886.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14533 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11924/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11924/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11924/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928231 - PreCommit-HIVE-Build

> Logs may be directed to 2 files if --hiveconf hive.log.file is used
> ---
>
> Key: HIVE-19886
> URL: https://issues.apache.org/jira/browse/HIVE-19886
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Jaume M
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19886.patch
>
>
> The hive launch script explicitly specifies the log4j2 configuration file to use. 
> The main() methods in HiveServer2 and HiveMetastore reconfigure the logger based 
> on user input via --hiveconf hive.log.file. This may cause logs to end up in 
> 2 different files: initial logs go to the file specified in 
> hive-log4j2.properties, and after logger reconfiguration the rest of the logs 
> go to the file specified via --hiveconf hive.log.file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19403) Demote 'Pattern' Logging

2018-06-19 Thread BELUGA BEHR (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517279#comment-16517279
 ] 

BELUGA BEHR commented on HIVE-19403:


[~aihuaxu] Let's just get this change into the project as-is.

> Demote 'Pattern' Logging
> 
>
> Key: HIVE-19403
> URL: https://issues.apache.org/jira/browse/HIVE-19403
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 2.4.0
>Reporter: BELUGA BEHR
>Assignee: gonglinglei
>Priority: Trivial
>  Labels: noob
> Attachments: HIVE-19403.1.patch
>
>
> In the {{DDLTask}} class, there is some logging that is not helpful to a 
> cluster admin and should be demoted to _debug_ level logging.  In fact, in 
> one place in the code, it already is.
> {code}
> LOG.info("pattern: {}", showDatabasesDesc.getPattern());
> LOG.debug("pattern: {}", pattern);
> LOG.info("pattern: {}", showFuncs.getPattern());
> LOG.info("pattern: {}", showTblStatus.getPattern());
> {code}
> Here is an example... as an admin, I can already see what the pattern is, I 
> do not need this extra logging.  It provides no additional context.
> {code:java|title=Example}
> 2018-05-03 03:08:26,354 INFO  org.apache.hadoop.hive.ql.Driver: 
> [HiveServer2-Background-Pool: Thread-101980]: Executing 
> command(queryId=hive_20180503030808_e53c26ef-2280-4eca-929b-668503105e2e): 
> SHOW TABLE EXTENDED FROM my_db LIKE '*'
> 2018-05-03 03:08:26,355 INFO  hive.ql.exec.DDLTask: 
> [HiveServer2-Background-Pool: Thread-101980]: pattern: *
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19886) Logs may be directed to 2 files if --hiveconf hive.log.file is used

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517256#comment-16517256
 ] 

Hive QA commented on HIVE-19886:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11924/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11924/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Logs may be directed to 2 files if --hiveconf hive.log.file is used
> ---
>
> Key: HIVE-19886
> URL: https://issues.apache.org/jira/browse/HIVE-19886
> Project: Hive
>  Issue Type: Bug
>  Components: Logging
>Affects Versions: 3.1.0, 4.0.0
>Reporter: Prasanth Jayachandran
>Assignee: Jaume M
>Priority: Major
>  Labels: pull-request-available
> Attachments: HIVE-19886.patch
>
>
> The hive launch script explicitly specifies the log4j2 configuration file to use. 
> The main() methods in HiveServer2 and HiveMetastore reconfigure the logger based 
> on user input via --hiveconf hive.log.file. This may cause logs to end up in 
> 2 different files: initial logs go to the file specified in 
> hive-log4j2.properties, and after logger reconfiguration the rest of the logs 
> go to the file specified via --hiveconf hive.log.file. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19649) Clean up inputs in JDBC PreparedStatement. Add unit tests.

2018-06-19 Thread Mykhailo Kysliuk (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517245#comment-16517245
 ] 

Mykhailo Kysliuk commented on HIVE-19649:
-

Yes, I am using the eclipse-styles.xml formatter from the 'How to contribute' article. 
These annotations were formatted automatically. 


> Clean up inputs in JDBC PreparedStatement. Add unit tests.
> --
>
> Key: HIVE-19649
> URL: https://issues.apache.org/jira/browse/HIVE-19649
> Project: Hive
>  Issue Type: Test
>Reporter: Mykhailo Kysliuk
>Assignee: Mykhailo Kysliuk
>Priority: Minor
> Attachments: HIVE-19649.01.patch, HIVE-19649.02.patch
>
>
> Add unit tests for the feature that was implemented in 
> [HIVE-18788|https://issues.apache.org/jira/browse/HIVE-18788].
> The integration tests are present, but unit tests will be useful to catch errors 
> during the module build.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19911) Hive delete queries fail with Invalid table alias or column reference

2018-06-19 Thread Mykhailo Kysliuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mykhailo Kysliuk updated HIVE-19911:

Attachment: HIVE-19911.1-branch-2.3.patch

> Hive delete queries fail with Invalid table alias or column reference
> -
>
> Key: HIVE-19911
> URL: https://issues.apache.org/jira/browse/HIVE-19911
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.3.3
>Reporter: Mykhailo Kysliuk
>Priority: Major
> Attachments: HIVE-19911.1-branch-2.3.patch
>
>
> Env:
> hadoop-2.7.0
> hive-2.3.3
> OS:
> centos-release-7-5.1804.el7.centos.x86_64
> Steps to reproduce (at hive cli):
> {code}
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> DROP TABLE IF EXISTS detaillineitem_all;
> DROP TABLE IF EXISTS detaillineitem_all_delete_1526330755128;
> CREATE TABLE `detaillineitem_all`(
>   `detailid` decimal(20,0),
>   `branchnumber` varchar(3)
> ) PARTITIONED BY (
>   `branchnumber_p` varchar(3))
> CLUSTERED BY (
>   detailid)
> INTO 25 BUCKETS
> STORED AS ORC
> TBLPROPERTIES (
>   'orc.compress'='NONE',
>   'transactional'='true');
> CREATE TABLE `detaillineitem_all_delete_1526330755128`(
>   `detailid` decimal(20,0),
>   `branchnumber` varchar(3),
>   `branchnumber_p` varchar(3));
> DELETE from detaillineitem_all WHERE EXISTS (
> SELECT
> 1
> FROM
> detaillineitem_all_delete_1526330755128 AS t1
> WHERE
> (detaillineitem_all.detailid = t1.detailid)
>   AND
> (detaillineitem_all.branchnumber = CAST(t1.branchnumber AS STRING)));
> {code}
> Exception:
> {code}
> 2018-06-15T16:51:48,625 ERROR [f6bd86a7-04e5-4284-9031-3b9a0ccc80f3 main] 
> ql.Driver: FAILED: SemanticException Line 0:-1 Invalid table alias or column 
> reference 'sq_1': (possible column names are: mber)) sq_corr_1)) (tok_where 
> (= 1 1), (. (tok_table_or_col sq_1) sq_corr_1))
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Invalid table 
> alias or column reference 'sq_1': (possible column names are: mber)) 
> sq_corr_1)) (tok_where (= 1 1), (. (tok_table_or_col sq_1) sq_corr_1))
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:11620)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11568)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11536)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11514)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapGroupByForSemijoin(SemanticAnalyzer.java:8416)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinOperator(SemanticAnalyzer.java:8305)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:3278)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:9592)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10549)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10427)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:11125)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10807)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:73)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.reparseAndSuperAnalyze(UpdateDeleteSemanticAnalyzer.java:462)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeDelete(UpdateDeleteSemanticAnalyzer.java:111)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:81)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at 

[jira] [Updated] (HIVE-19911) Hive delete queries fail with Invalid table alias or column reference

2018-06-19 Thread Mykhailo Kysliuk (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mykhailo Kysliuk updated HIVE-19911:

Status: Patch Available  (was: Open)

> Hive delete queries fail with Invalid table alias or column reference
> -
>
> Key: HIVE-19911
> URL: https://issues.apache.org/jira/browse/HIVE-19911
> Project: Hive
>  Issue Type: Bug
>  Components: Transactions
>Affects Versions: 2.3.3
>Reporter: Mykhailo Kysliuk
>Priority: Major
> Attachments: HIVE-19911.1-branch-2.3.patch
>
>
> Env:
> hadoop-2.7.0
> hive-2.3.3
> OS:
> centos-release-7-5.1804.el7.centos.x86_64
> Steps to reproduce (at hive cli):
> {code}
> set hive.support.concurrency=true;
> set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
> DROP TABLE IF EXISTS detaillineitem_all;
> DROP TABLE IF EXISTS detaillineitem_all_delete_1526330755128;
> CREATE TABLE `detaillineitem_all`(
>   `detailid` decimal(20,0),
>   `branchnumber` varchar(3)
> ) PARTITIONED BY (
>   `branchnumber_p` varchar(3))
> CLUSTERED BY (
>   detailid)
> INTO 25 BUCKETS
> STORED AS ORC
> TBLPROPERTIES (
>   'orc.compress'='NONE',
>   'transactional'='true');
> CREATE TABLE `detaillineitem_all_delete_1526330755128`(
>   `detailid` decimal(20,0),
>   `branchnumber` varchar(3),
>   `branchnumber_p` varchar(3));
> DELETE from detaillineitem_all WHERE EXISTS (
> SELECT
> 1
> FROM
> detaillineitem_all_delete_1526330755128 AS t1
> WHERE
> (detaillineitem_all.detailid = t1.detailid)
>   AND
> (detaillineitem_all.branchnumber = CAST(t1.branchnumber AS STRING)));
> {code}
> Exception:
> {code}
> 2018-06-15T16:51:48,625 ERROR [f6bd86a7-04e5-4284-9031-3b9a0ccc80f3 main] 
> ql.Driver: FAILED: SemanticException Line 0:-1 Invalid table alias or column 
> reference 'sq_1': (possible column names are: mber)) sq_corr_1)) (tok_where 
> (= 1 1), (. (tok_table_or_col sq_1) sq_corr_1))
> org.apache.hadoop.hive.ql.parse.SemanticException: Line 0:-1 Invalid table 
> alias or column reference 'sq_1': (possible column names are: mber)) 
> sq_corr_1)) (tok_where (= 1 1), (. (tok_table_or_col sq_1) sq_corr_1))
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genAllExprNodeDesc(SemanticAnalyzer.java:11620)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11568)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11536)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genExprNodeDesc(SemanticAnalyzer.java:11514)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genMapGroupByForSemijoin(SemanticAnalyzer.java:8416)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genJoinOperator(SemanticAnalyzer.java:8305)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFilterPlan(SemanticAnalyzer.java:3278)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:9592)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10549)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:10427)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genOPTree(SemanticAnalyzer.java:11125)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
> at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:10807)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:73)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.reparseAndSuperAnalyze(UpdateDeleteSemanticAnalyzer.java:462)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeDelete(UpdateDeleteSemanticAnalyzer.java:111)
> at 
> org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:81)
> at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
> at 

[jira] [Commented] (HIVE-19821) Distributed HiveServer2

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517235#comment-16517235
 ] 

Hive QA commented on HIVE-19821:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928256/HIVE-19821.2.WIP.patch

{color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 1294 failed/errored test(s), 14430 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.org.apache.hadoop.hive.cli.TestAccumuloCliDriver
 (batchId=249)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[buckets] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[create_like] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[explain] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[having] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_local]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_blobstore_to_warehouse]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_addpartition_local_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_blobstore_nonpart]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_local]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_blobstore_to_warehouse_nonpart]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[import_local_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_merge_move]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_merge_only]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_move_only]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_table]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join2] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[load_data] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[multiple_agg] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[multiple_db] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[nested_outer_join]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_buckets] 
(batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_nonpart]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_part]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_nonstd_partitions_loc]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_general_queries]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_persistence]
 (batchId=260)
org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_buckets] 
(batchId=260)

[jira] [Updated] (HIVE-19812) Disable external table replication by default via a configuration property

2018-06-19 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19812:
---
Attachment: HIVE-19812.06.patch

> Disable external table replication by default via a configuration property
> --
>
> Key: HIVE-19812
> URL: https://issues.apache.org/jira/browse/HIVE-19812
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Affects Versions: 3.1.0, 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19812.01.patch, HIVE-19812.02.patch, 
> HIVE-19812.03.patch, HIVE-19812.04.patch, HIVE-19812.05.patch, 
> HIVE-19812.06.patch
>
>
> Use a Hive config property to allow external table replication; by default the 
> property should prevent external table replication.
> For metadata-only dumps, Hive repl always exports metadata for external tables.
>  
> REPL_DUMP_EXTERNAL_TABLES("hive.repl.dump.include.external.tables", false,
> "Indicates if repl dump should include information about external tables. It 
> should be \n"
> + "used in conjunction with 'hive.repl.dump.metadata.only' set to false. if 
> 'hive.repl.dump.metadata.only' \n"
> + " is set to true then this config parameter has no effect as external table 
> meta data is flushed \n"
> + " always by default.")
> This should be done only for replication dump, not for export.
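
A small, self-contained model of the behaviour described above (illustrative only; the class and method names are assumptions, not the patch): external tables are dumped only when the include-external-tables flag is on, except that metadata-only dumps always export external table metadata.

{code:java}
// Toy model of the described policy, not Hive code.
public class ReplDumpExternalTablePolicy {

    static boolean shouldDumpExternalTable(boolean metadataOnlyDump, boolean includeExternalTables) {
        if (metadataOnlyDump) {
            return true;   // the flag has no effect for metadata-only dumps
        }
        return includeExternalTables;
    }

    public static void main(String[] args) {
        System.out.println(shouldDumpExternalTable(false, false));  // false: skipped by default
        System.out.println(shouldDumpExternalTable(true, false));   // true: metadata always dumped
    }
}
{code}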



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19821) Distributed HiveServer2

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517216#comment-16517216
 ] 

Hive QA commented on HIVE-19821:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
49s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
32s{color} | {color:blue} common in master has 62 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
37s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
43s{color} | {color:blue} itests/util in master has 55 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
10s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
42s{color} | {color:blue} service in master has 48 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
24s{color} | {color:blue} spark-client in master has 10 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
23s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} util in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
32s{color} | {color:red} ql in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
22s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
27s{color} | {color:red} util in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
22s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 27s{color} 
| {color:red} util in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 22s{color} 
| {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} itests/hive-unit: The patch generated 20 new + 0 
unchanged - 0 fixed = 20 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
43s{color} | {color:red} ql: The patch generated 46 new + 112 unchanged - 2 
fixed = 158 total (was 114) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
10s{color} | {color:red} spark-client: The patch generated 7 new + 26 unchanged 
- 2 fixed = 33 total (was 28) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
24s{color} | {color:red} util in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
22s{color} | {color:red} ql generated 2 new + 2280 unchanged - 0 fixed = 2282 
total (was 2280) {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
20s{color} | {color:red} service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
58s{color} | {color:red} ql generated 4 new + 96 unchanged - 4 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense 

[jira] [Updated] (HIVE-19829) Incremental replication load should create tasks in execution phase rather than semantic phase

2018-06-19 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19829:
---
Attachment: HIVE-19829.07.patch

> Incremental replication load should create tasks in execution phase rather 
> than semantic phase
> --
>
> Key: HIVE-19829
> URL: https://issues.apache.org/jira/browse/HIVE-19829
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Affects Versions: 3.1.0, 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19829.01.patch, HIVE-19829.02.patch, 
> HIVE-19829.03.patch, HIVE-19829.04.patch, HIVE-19829.06.patch, 
> HIVE-19829.07.patch, HIVE-19829.07.patch
>
>
> Split the incremental load into multiple iterations. In each iteration, create a 
> number of tasks equal to the configured value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19267) Create/Replicate ACID Write event

2018-06-19 Thread mahesh kumar behera (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19267:
---
Attachment: HIVE-19267.17.patch

> Create/Replicate ACID Write event
> -
>
> Key: HIVE-19267
> URL: https://issues.apache.org/jira/browse/HIVE-19267
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Attachments: HIVE-19267.01.patch, HIVE-19267.02.patch, 
> HIVE-19267.03.patch, HIVE-19267.04.patch, HIVE-19267.05.patch, 
> HIVE-19267.06.patch, HIVE-19267.07.patch, HIVE-19267.08.patch, 
> HIVE-19267.09.patch, HIVE-19267.10.patch, HIVE-19267.11.patch, 
> HIVE-19267.12.patch, HIVE-19267.13.patch, HIVE-19267.14.patch, 
> HIVE-19267.15.patch, HIVE-19267.16.patch, HIVE-19267.17.patch, 
> HIVE-19267.17.patch
>
>
>  
> h1. Replicate ACID write Events
>  * Create a new EVENT_WRITE event with a related message format to log the write 
> operations within a txn, along with the associated data.
>  * Log this event when performing any writes (insert into, insert overwrite, 
> load table, delete, update, merge, truncate) on a table/partition.
>  * If a single MERGE/UPDATE/INSERT/DELETE statement operates on multiple 
> partitions, then one event per partition needs to be logged.
>  * DbNotificationListener should log this type of event to special metastore 
> table named "MTxnWriteNotificationLog".
>  * This table should maintain a map of txn ID against list of 
> tables/partitions written by given txn.
>  * The entry for a given txn should be removed by the cleaner thread that 
> removes the expired events from EventNotificationTable.
> h1. Replicate Commit Txn operation (with writes)
> Add new EVENT_COMMIT_TXN to log the metadata/data of all tables/partitions 
> modified within the txn.
> *Source warehouse:*
>  * This event should read the EVENT_WRITEs from "MTxnWriteNotificationLog" 
> metastore table to consolidate the list of tables/partitions modified within 
> this txn scope.
>  * Based on the list of tables/partitions modified and the table write ID, the 
> list of delta files added by this txn needs to be computed.
>  * Repl dump should read this message and dump the metadata and delta files 
> list.
> *Target warehouse:*
>  * Ensure snapshot isolation at target for on-going read txns which shouldn't 
> view the data replicated from committed txn. (Ensured with open and allocate 
> write ID events).
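
As a reading aid, here is a toy in-memory model of the txn-to-writes bookkeeping described above. The real design keeps this map in the MTxnWriteNotificationLog metastore table and removes entries via the cleaner thread; the class below is purely illustrative and its names are assumptions.

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical in-memory model (not the metastore implementation) of the
// txn-id -> written tables/partitions map used to consolidate writes on commit.
public class TxnWriteLogModel {

    private final Map<Long, Set<String>> writesByTxn = new HashMap<>();

    /** Record one write event: a table or partition touched by the given transaction. */
    public void logWrite(long txnId, String tableOrPartition) {
        writesByTxn.computeIfAbsent(txnId, id -> new LinkedHashSet<>()).add(tableOrPartition);
    }

    /** On commit, consolidate everything the txn wrote and drop the entry (the cleaner's job). */
    public Set<String> consolidateOnCommit(long txnId) {
        Set<String> writes = writesByTxn.remove(txnId);
        return writes == null ? Collections.emptySet() : writes;
    }

    public static void main(String[] args) {
        TxnWriteLogModel log = new TxnWriteLogModel();
        log.logWrite(42L, "db.tbl/part=2018-06-19");
        log.logWrite(42L, "db.tbl2");
        System.out.println(log.consolidateOnCommit(42L));  // [db.tbl/part=2018-06-19, db.tbl2]
    }
}
{code}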



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19882) Fix QTestUtil session lifecycle

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517161#comment-16517161
 ] 

Hive QA commented on HIVE-19882:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928298/HIVE-19882.06.patch

{color:green}SUCCESS:{color} +1 due to 26 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 32 failed/errored test(s), 14534 tests 
executed
*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2]
 (batchId=249)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key]
 (batchId=249)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_index] 
(batchId=249)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] 
(batchId=249)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown]
 (batchId=249)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries]
 (batchId=249)
org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert]
 (batchId=249)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[cascade_dbdrop]
 (batchId=255)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[generatehfiles_require_family_path]
 (batchId=255)
org.apache.hadoop.hive.cli.TestHBaseNegativeCliDriver.testCliDriver[hbase_ddl] 
(batchId=255)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[ambiguous_join_col]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[duplicate_alias]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[insert_wrong_number_columns]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[invalid_dot]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[invalid_function_param2]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[invalid_index]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[missing_overwrite]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[nonkey_groupby]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[quoted_string]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_column1]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_column2]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_column3]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_column4]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_column5]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_column6]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_function1]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_function2]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_function3]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_function4]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[unknown_table1]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[wrong_distinct1]
 (batchId=256)
org.apache.hadoop.hive.ql.parse.TestParseNegativeDriver.testCliDriver[wrong_distinct2]
 (batchId=256)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11922/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11922/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11922/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 32 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928298 - PreCommit-HIVE-Build

> Fix QTestUtil session lifecycle
> ---
>
> Key: HIVE-19882
> URL: https://issues.apache.org/jira/browse/HIVE-19882
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19882.01.patch, HIVE-19882.02.patch, 
> HIVE-19882.03.patch, 

[jira] [Assigned] (HIVE-19922) TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky

2018-06-19 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary reassigned HIVE-19922:
-

Assignee: Peter Vary

> TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky
> --
>
> Key: HIVE-19922
> URL: https://issues.apache.org/jira/browse/HIVE-19922
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-19922.2.patch, HIVE-19922.3.patch, HIVE-19922.patch
>
>
> Consistently failing in the last 4 runs.
> See:
> [https://builds.apache.org/job/PreCommit-HIVE-Build/11824/testReport/org.apache.hadoop.hive.cli/TestMiniDruidKafkaCliDriver/testCliDriver_druidkafkamini_basic_/history/]
> Cannot reproduce the failure locally :(
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-06-19 Thread Zoltan Haindrich (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517140#comment-16517140
 ] 

Zoltan Haindrich commented on HIVE-18140:
-

[~ashutoshc] Could you please take a look?

> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18140.01.patch, HIVE-18140.01wip01.patch, 
> HIVE-18140.01wip03.patch, HIVE-18140.01wip04.patch, HIVE-18140.02.patch, 
> HIVE-18140.02wip01.patch, HIVE-18140.03.patch, HIVE-19140.02wip02.patch, 
> HIVE-19727.02wip03.patch
>
>
> Suppose the following scenario:
> * part1 has basic stats {{RC=10,DS=1K}}
> * all other partitions have no basic stats (and a bunch of rows)
> then 
> [this|https://github.com/apache/hive/blob/d9924ab3e285536f7e2cc15ecbea36a78c59c66d/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L378]
>  condition would be false, which in turn produces an estimate for the whole 
> partitioned table of {{RC=10,DS=1K}}
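To make the scenario above concrete, here is a toy illustration, not Hive's actual StatsUtils logic; the extrapolate-by-average strategy is only an assumption for the example. It shows why summing only the partitions that have basic stats yields the quoted {{RC=10}} for the whole table:

{code:java}
public class MixedBasicStatsExample {
  // -1 marks a partition without basic stats.
  static long naiveRowCount(long[] rcs) {
    long sum = 0;
    for (long rc : rcs) {
      if (rc >= 0) {
        sum += rc;            // only part1 contributes: 10
      }
    }
    return sum;
  }

  // Hypothetical alternative: extrapolate the missing partitions from the
  // average of the known ones, so the table-level estimate is not just part1.
  static long extrapolatedRowCount(long[] rcs) {
    long sum = 0;
    int known = 0;
    for (long rc : rcs) {
      if (rc >= 0) {
        sum += rc;
        known++;
      }
    }
    if (known == 0) {
      return -1;              // nothing to estimate from
    }
    return sum + (sum / known) * (rcs.length - known);
  }

  public static void main(String[] args) {
    long[] rcs = {10, -1, -1, -1};  // part1 has RC=10, the rest have no stats
    System.out.println(naiveRowCount(rcs));         // 10 (the misleading estimate)
    System.out.println(extrapolatedRowCount(rcs));  // 40
  }
}
{code}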



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-12342) Set default value of hive.optimize.index.filter to true

2018-06-19 Thread Igor Kryvenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-12342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Kryvenko updated HIVE-12342:
-
Attachment: HIVE-12342.18.patch

> Set default value of hive.optimize.index.filter to true
> ---
>
> Key: HIVE-12342
> URL: https://issues.apache.org/jira/browse/HIVE-12342
> Project: Hive
>  Issue Type: Task
>  Components: Configuration
>Reporter: Ashutosh Chauhan
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-12342.05.patch, HIVE-12342.06.patch, 
> HIVE-12342.07.patch, HIVE-12342.08.patch, HIVE-12342.09.patch, 
> HIVE-12342.1.patch, HIVE-12342.10.patch, HIVE-12342.11.patch, 
> HIVE-12342.12.patch, HIVE-12342.13.patch, HIVE-12342.14.patch, 
> HIVE-12342.15.patch, HIVE-12342.16.patch, HIVE-12342.17.patch, 
> HIVE-12342.18.patch, HIVE-12342.2.patch, HIVE-12342.3.patch, 
> HIVE-12342.4.patch, HIVE-12342.patch
>
>
> This configuration governs ppd for storage layer. When applicable, it will 
> always help. It should be on by default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19882) Fix QTestUtil session lifecycle

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517090#comment-16517090
 ] 

Hive QA commented on HIVE-19882:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
29s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
45s{color} | {color:blue} itests/util in master has 55 extant Findbugs 
warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m 
11s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
8s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
16s{color} | {color:green} root: The patch generated 0 new + 1541 unchanged - 
23 fixed = 1541 total (was 1564) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} The patch hive-unit passed checkstyle {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} itests/util: The patch generated 0 new + 111 
unchanged - 23 fixed = 111 total (was 134) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
48s{color} | {color:green} The patch ql passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} hive-unit in the patch passed. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
53s{color} | {color:green} itests/util generated 0 new + 52 unchanged - 3 fixed 
= 52 total (was 55) {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
17s{color} | {color:green} ql in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 65m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11922/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: . itests itests/hive-unit itests/util ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11922/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix QTestUtil session lifecycle
> ---
>
> Key: HIVE-19882
> URL: 

[jira] [Commented] (HIVE-18916) SparkClientImpl doesn't error out if spark-submit fails

2018-06-19 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517083#comment-16517083
 ] 

Sahil Takiar commented on HIVE-18916:
-

[~aihuaxu] addressed comments.

> SparkClientImpl doesn't error out if spark-submit fails
> ---
>
> Key: HIVE-18916
> URL: https://issues.apache.org/jira/browse/HIVE-18916
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18916.1.WIP.patch, HIVE-18916.2.patch, 
> HIVE-18916.3.patch, HIVE-18916.4.patch
>
>
> If {{spark-submit}} returns a non-zero exit code, {{SparkClientImpl}} will 
> simply log the exit code, but won't throw an error. Eventually, the 
> connection timeout will get triggered and an exception like {{Timed out 
> waiting for client connection}} will be logged, which is pretty misleading.
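A minimal sketch of the fail-fast behaviour being asked for here; it uses plain {{ProcessBuilder}} and a hypothetical class name, and is illustrative only, not the actual {{SparkClientImpl}} change in the attached patches:

{code:java}
import java.io.IOException;

public class FailFastLauncher {
  /**
   * Launch a child process (e.g. spark-submit) and surface a non-zero exit
   * code as an exception immediately, instead of only logging it and letting
   * a later connection timeout hide the real cause.
   */
  public static void runOrFail(String... command) throws IOException, InterruptedException {
    Process process = new ProcessBuilder(command).inheritIO().start();
    int exitCode = process.waitFor();
    if (exitCode != 0) {
      throw new IOException("Process " + String.join(" ", command)
          + " exited with code " + exitCode);
    }
  }
}
{code}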



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19882) Fix QTestUtil session lifecycle

2018-06-19 Thread Zoltan Haindrich (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19882:

Attachment: HIVE-19882.07.patch

> Fix QTestUtil session lifecycle
> ---
>
> Key: HIVE-19882
> URL: https://issues.apache.org/jira/browse/HIVE-19882
> Project: Hive
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19882.01.patch, HIVE-19882.02.patch, 
> HIVE-19882.03.patch, HIVE-19882.04.patch, HIVE-19882.05.patch, 
> HIVE-19882.06.patch, HIVE-19882.07.patch
>
>
> There are a number of tests that fail intermittently; it was always strange 
> to me that QTestUtil cleans up at some questionable points - this seems to 
> lead to executing some commands with the previous q-file's session...
> Ideally the session (and related state) should be started or reused in 
> {{before}} and closed in {{after}}.
> Configuration also seems to be handled incorrectly; saving the conf after 
> initialization and restoring it for each new session should ensure 
> consistency.
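A minimal sketch of the lifecycle described above, using plain JUnit 4 and a hypothetical session handle; this is not QTestUtil code, just the shape of start/reuse in {{before}} and close in {{after}}:

{code:java}
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class SessionLifecycleSketch {
  // Hypothetical stand-in for the q-file test session.
  private AutoCloseable session;

  @Before
  public void setUp() {
    // Start (or reuse) the session here, restoring the saved baseline conf,
    // so every q-file runs against a known-clean session.
    session = () -> { };
  }

  @After
  public void tearDown() throws Exception {
    // Always close here, so no state leaks into the next q-file.
    session.close();
  }

  @Test
  public void runQFile() {
    // ... execute the q-file against the per-test session ...
  }
}
{code}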



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19847) Create Separate getInputSummary Service

2018-06-19 Thread Yongzhi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517056#comment-16517056
 ] 

Yongzhi Chen commented on HIVE-19847:
-

1. Will you shut down the executorService when one query is cancelled? Will it 
affect other queries?
2. How will the system recover from an executorService shutdown caused by an 
interruption? 

> Create Separate getInputSummary Service
> ---
>
> Key: HIVE-19847
> URL: https://issues.apache.org/jira/browse/HIVE-19847
> Project: Hive
>  Issue Type: Improvement
>  Components: HiveServer2
>Affects Versions: 3.0.0, 4.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Major
> Attachments: HIVE-19847.1.patch, HIVE-19847.2.patch
>
>
> The Hive {{org.apache.hadoop.hive.ql.exec.Utilities.java}} file has taken on 
> a life of its own.  We should consider separating out the various components 
> into their own classes.  For this ticket, I propose separating out the 
> {{getInputSummary}} functionality into its own class.
> There are several issues with the current implementation:
> # It is 
> [synchronized|https://github.com/apache/hive/blob/f27c38ff55902827499192a4f8cf8ed37d6fd967/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#L2383].
>   Only one query can get file input summary at a time.  For a query which 
> deals with a large data set with a large number of files, this can block 
> other queries for a long period of time.  This is especially painful when 
> most queries use a small data set, but a large data set is submitted on 
> occasion.
> # For each query, time is spent setting up and tearing down a ThreadPool
> # It uses deprecated code
> I propose breaking it out into its own class and creating a single thread 
> pool that all queries pull from.  In this way, the bottleneck will be the 
> number of available threads, not a single query; if a big query is running 
> and a small query is also submitted, the smaller query will still be able 
> to proceed.
> In regards to setup/teardown... if a query uses 15 threads to perform this 
> summary action and then finishes, it tears down those threads, and the next 
> query may immediately create 15 new threads for processing.  With a single 
> pool, those threads never pay the setup and teardown cost.
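A minimal sketch of the shared-pool idea described above; the class name and the pool size are illustrative assumptions, not the contents of the attached patch:

{code:java}
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class SharedInputSummaryPool {
  // One process-wide pool shared by all queries; its size, rather than a
  // synchronized method, becomes the only throttle.
  private static final ExecutorService POOL = Executors.newFixedThreadPool(15);

  private SharedInputSummaryPool() {
  }

  // Each query submits its per-path summary tasks here; a large query only
  // occupies threads, it does not block other queries from submitting.
  public static <T> List<Future<T>> submitAll(List<Callable<T>> tasks)
      throws InterruptedException {
    return POOL.invokeAll(tasks);
  }
}
{code}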



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18725) Improve error handling for subqueries if there is wrong column reference

2018-06-19 Thread Igor Kryvenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Kryvenko updated HIVE-18725:
-
Attachment: HIVE-18725.07.patch

> Improve error handling for subqueries if there is wrong column reference
> 
>
> Key: HIVE-18725
> URL: https://issues.apache.org/jira/browse/HIVE-18725
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-18725.01.patch, HIVE-18725.02.patch, 
> HIVE-18725.03.patch, HIVE-18725.04.patch, HIVE-18725.05.patch, 
> HIVE-18725.06.patch, HIVE-18725.07.patch
>
>
> If there is a column reference within a subquery which doesn't exist, Hive 
> throws a misleading error message.
> e.g. 
> {code:sql}
> select * from table1 where table1.col1 IN (select col2 from table2 where 
> table2.col1=table1.non_existing_column) and table1.col1 IN (select 4);
> {code}
> The above query, assuming table1 doesn't have non_existing_column, will throw 
> the following misleading error:
> {noformat}
> FAILED: SemanticException Line 0:-1 Unsupported SubQuery Expression 'col1': 
> Only 1 SubQuery expression is supported.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18725) Improve error handling for subqueries if there is wrong column reference

2018-06-19 Thread Igor Kryvenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Kryvenko updated HIVE-18725:
-
Attachment: (was: HIVE-18275.07.patch)

> Improve error handling for subqueries if there is wrong column reference
> 
>
> Key: HIVE-18725
> URL: https://issues.apache.org/jira/browse/HIVE-18725
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-18725.01.patch, HIVE-18725.02.patch, 
> HIVE-18725.03.patch, HIVE-18725.04.patch, HIVE-18725.05.patch, 
> HIVE-18725.06.patch, HIVE-18725.07.patch
>
>
> If there is a column reference within a subquery which doesn't exist, Hive 
> throws a misleading error message.
> e.g. 
> {code:sql}
> select * from table1 where table1.col1 IN (select col2 from table2 where 
> table2.col1=table1.non_existing_column) and table1.col1 IN (select 4);
> {code}
> The above query, assuming table1 doesn't have non_existing_column, will throw 
> the following misleading error:
> {noformat}
> FAILED: SemanticException Line 0:-1 Unsupported SubQuery Expression 'col1': 
> Only 1 SubQuery expression is supported.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18725) Improve error handling for subqueries if there is wrong column reference

2018-06-19 Thread Igor Kryvenko (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Igor Kryvenko updated HIVE-18725:
-
Attachment: HIVE-18275.07.patch

> Improve error handling for subqueries if there is wrong column reference
> 
>
> Key: HIVE-18725
> URL: https://issues.apache.org/jira/browse/HIVE-18725
> Project: Hive
>  Issue Type: Improvement
>  Components: Query Planning
>Reporter: Vineet Garg
>Assignee: Igor Kryvenko
>Priority: Major
> Attachments: HIVE-18275.07.patch, HIVE-18725.01.patch, 
> HIVE-18725.02.patch, HIVE-18725.03.patch, HIVE-18725.04.patch, 
> HIVE-18725.05.patch, HIVE-18725.06.patch
>
>
> If there is a column reference within a subquery which doesn't exist, Hive 
> throws a misleading error message.
> e.g. 
> {code:sql}
> select * from table1 where table1.col1 IN (select col2 from table2 where 
> table2.col1=table1.non_existing_column) and table1.col1 IN (select 4);
> {code}
> The above query, assuming table1 doesn't have non_existing_column, will throw 
> the following misleading error:
> {noformat}
> FAILED: SemanticException Line 0:-1 Unsupported SubQuery Expression 'col1': 
> Only 1 SubQuery expression is supported.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19946) VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different JVMs

2018-06-19 Thread Teddy Choi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-19946:
--
Status: Patch Available  (was: Open)

> VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different 
> JVMs
> --
>
> Key: HIVE-19946
> URL: https://issues.apache.org/jira/browse/HIVE-19946
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19946.1.patch
>
>
> VectorizedRowBatchCtx.recordIdColumnVector was used temporarily to pass the 
> record id column, which is virtual, between a reducer and a mapper. However, 
> when the reducer and the mapper are not in the same JVM, it produces incorrect 
> results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19946) VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different JVMs

2018-06-19 Thread Teddy Choi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi updated HIVE-19946:
--
Attachment: HIVE-19946.1.patch

> VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different 
> JVMs
> --
>
> Key: HIVE-19946
> URL: https://issues.apache.org/jira/browse/HIVE-19946
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
> Attachments: HIVE-19946.1.patch
>
>
> VectorizedRowBatchCtx.recordIdColumnVector was used temporarily to pass the 
> record id column, which is virtual, between a reducer and a mapper. However, 
> when the reducer and the mapper are not in the same JVM, it produces incorrect 
> results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19946) VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different JVMs

2018-06-19 Thread Teddy Choi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Teddy Choi reassigned HIVE-19946:
-


> VectorizedRowBatchCtx.recordIdColumnVector cannot be shared between different 
> JVMs
> --
>
> Key: HIVE-19946
> URL: https://issues.apache.org/jira/browse/HIVE-19946
> Project: Hive
>  Issue Type: Bug
>Reporter: Teddy Choi
>Assignee: Teddy Choi
>Priority: Major
>
> VectorizedRowBatchCtx.recordIdColumnVector was used temporarily to pass the 
> record id column, which is virtual, between a reducer and a mapper. However, 
> when the reducer and the mapper are not in the same JVM, it produces incorrect 
> results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19945) Beeline - run against a different sql engine

2018-06-19 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-19945:

Description: 
Original idea by [~kgyrtkirk] 
"I think beeline also support to load different sql drivers (not sure about 
this...but I think I saw some pointers to this) Anyway...it would be great to 
be able to execute a test against a different sql engine; like psql."

something like:
{code}
mvn test -Dtest=TestQ -Dqengine=ExternalPSQL 
-Djdbc.uri=psql://localhost:5432/somedb
{code}

> Beeline - run against a different sql engine
> 
>
> Key: HIVE-19945
> URL: https://issues.apache.org/jira/browse/HIVE-19945
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 3.0.0
>Reporter: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
>
> Original idea by [~kgyrtkirk] 
> "I think beeline also support to load different sql drivers (not sure about 
> this...but I think I saw some pointers to this) Anyway...it would be great to 
> be able to execute a test against a different sql engine; like psql."
> something like:
> {code}
> mvn test -Dtest=TestQ -Dqengine=ExternalPSQL 
> -Djdbc.uri=psql://localhost:5432/somedb
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19945) Beeline - run against a different sql engine

2018-06-19 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor reassigned HIVE-19945:
---

Assignee: Laszlo Bodor

> Beeline - run against a different sql engine
> 
>
> Key: HIVE-19945
> URL: https://issues.apache.org/jira/browse/HIVE-19945
> Project: Hive
>  Issue Type: Improvement
>  Components: Beeline
>Affects Versions: 3.0.0
>Reporter: Laszlo Bodor
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
>
> Original idea by [~kgyrtkirk] 
> "I think beeline also support to load different sql drivers (not sure about 
> this...but I think I saw some pointers to this) Anyway...it would be great to 
> be able to execute a test against a different sql engine; like psql."
> something like:
> {code}
> mvn test -Dtest=TestQ -Dqengine=ExternalPSQL 
> -Djdbc.uri=psql://localhost:5432/somedb
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19921) Fix perf duration and queue name in HiveProtoLoggingHook

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16517021#comment-16517021
 ] 

Hive QA commented on HIVE-19921:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928208/HIVE-19921.01-branch-3.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 14348 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=256)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=256)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testManagedPaths 
(batchId=233)
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.testImpersonation 
(batchId=242)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=308)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11920/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11920/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11920/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928208 - PreCommit-HIVE-Build

> Fix perf duration and queue name in HiveProtoLoggingHook
> 
>
> Key: HIVE-19921
> URL: https://issues.apache.org/jira/browse/HIVE-19921
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
>Priority: Major
> Attachments: HIVE-19921.01-branch-3.patch, HIVE-19921.01.patch
>
>
> The perf log should return duration instead of end time.
> The queue name should be llap queue for llap queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-16255) Support percentile_cont / percentile_disc

2018-06-19 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-16255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor reassigned HIVE-16255:
---

Assignee: Laszlo Bodor

> Support percentile_cont / percentile_disc
> -
>
> Key: HIVE-16255
> URL: https://issues.apache.org/jira/browse/HIVE-16255
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Laszlo Bodor
>Priority: Major
>
> Way back in HIVE-259, a percentile function was added that provides a subset 
> of the standard percentile_cont aggregate function.
> The SQL standard provides some additional options and also a percentile_disc 
> aggregate function with different rules. In the standard you specify an 
> ordering with arbitrary value expression and the results are drawn from this 
> value expression. These aggregate functions should be usable as analytic 
> functions as well (i.e. support the over clause). The current percentile 
> function is able to be used with an over clause.
> The rough outline of how this works is:
> percentile_cont(number) within group (order by expression) [ over(window 
> spec) ]
> percentile_disc(number) within group (order by expression) [ over(window 
> spec) ]
> The value of number should be between 0 and 1. The value expression is 
> evaluated for each row of the group, nulls are discarded, and the remaining 
> rows are ordered.
> — If PERCENTILE_CONT is specified, by considering the pair of consecutive 
> rows that are indicated by the argument, treated as a fraction of the total 
> number of rows in the group, and interpolating the value of the value 
> expression evaluated for these rows.
> — If PERCENTILE_DISC is specified, by treating the group as a window 
> partition of the CUME_DIST window function, using the specified ordering of 
> the value expression as the window ordering, and returning the  first value 
> expression whose cumulative distribution value is greater than or equal to 
> the argument.
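A toy illustration of the PERCENTILE_DISC rule quoted above, in ordinary Java rather than a Hive UDAF: sort the non-null values and return the first one whose cumulative distribution is greater than or equal to the argument. It assumes distinct values; ties would share a CUME_DIST.

{code:java}
import java.util.Arrays;

public class PercentileDiscSketch {
  static double percentileDisc(double[] values, double p) {
    double[] sorted = values.clone();
    Arrays.sort(sorted);
    int n = sorted.length;
    for (int i = 0; i < n; i++) {
      double cumeDist = (i + 1) / (double) n;  // CUME_DIST of the i-th ordered row
      if (cumeDist >= p) {
        return sorted[i];
      }
    }
    return sorted[n - 1];
  }

  public static void main(String[] args) {
    // percentile_disc(0.5) within group (order by x) over {10, 20, 30, 40} -> 20.0
    System.out.println(percentileDisc(new double[] {10, 20, 30, 40}, 0.5));
  }
}
{code}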



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19922) TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky

2018-06-19 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-19922:
--
Attachment: HIVE-19922.3.patch

> TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky
> --
>
> Key: HIVE-19922
> URL: https://issues.apache.org/jira/browse/HIVE-19922
> Project: Hive
>  Issue Type: Bug
>Reporter: Peter Vary
>Priority: Major
> Attachments: HIVE-19922.2.patch, HIVE-19922.3.patch, HIVE-19922.patch
>
>
> Consistently failing in the last 4 runs.
> See:
> [https://builds.apache.org/job/PreCommit-HIVE-Build/11824/testReport/org.apache.hadoop.hive.cli/TestMiniDruidKafkaCliDriver/testCliDriver_druidkafkamini_basic_/history/]
> Can not reproduce the failure locally :(
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-15980) Support the all set quantifier

2018-06-19 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-15980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor resolved HIVE-15980.
-
Resolution: Duplicate

It seems this has been resolved by HIVE-16064.

> Support the all set quantifier
> --
>
> Key: HIVE-15980
> URL: https://issues.apache.org/jira/browse/HIVE-15980
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Laszlo Bodor
>Priority: Major
>
> SQL defines all and distinct as set quantifiers. All is often omitted. For 
> example, instead of sum(x) SQL standard allows sum(all x), which is 
> equivalent. SQL reference: section 10.9



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19783) Retrieve only locations in HiveMetaStore.dropPartitionsAndGetLocations

2018-06-19 Thread Peter Vary (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-19783:
--
Attachment: HIVE-19783.4.patch

> Retrieve only locations in HiveMetaStore.dropPartitionsAndGetLocations
> --
>
> Key: HIVE-19783
> URL: https://issues.apache.org/jira/browse/HIVE-19783
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-19783.2.patch, HIVE-19783.4.patch, HIVE-19783.patch
>
>
> Further optimize the dropTable command.
> Currently {{HiveMetaStore.dropPartitionsAndGetLocations}} retrieves whole 
> partition objects, but only their locations are needed.
> Create a RawStore method to retrieve only the locations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18916) SparkClientImpl doesn't error out if spark-submit fails

2018-06-19 Thread Sahil Takiar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-18916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sahil Takiar updated HIVE-18916:

Attachment: HIVE-18916.4.patch

> SparkClientImpl doesn't error out if spark-submit fails
> ---
>
> Key: HIVE-18916
> URL: https://issues.apache.org/jira/browse/HIVE-18916
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18916.1.WIP.patch, HIVE-18916.2.patch, 
> HIVE-18916.3.patch, HIVE-18916.4.patch
>
>
> If {{spark-submit}} returns a non-zero exit code, {{SparkClientImpl}} will 
> simply log the exit code, but won't throw an error. Eventually, the 
> connection timeout will get triggered and an exception like {{Timed out 
> waiting for client connection}} will be logged, which is pretty misleading.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19921) Fix perf duration and queue name in HiveProtoLoggingHook

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516949#comment-16516949
 ] 

Hive QA commented on HIVE-19921:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 11s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-11920/patches/PreCommit-HIVE-Build-11920.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11920/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Fix perf duration and queue name in HiveProtoLoggingHook
> 
>
> Key: HIVE-19921
> URL: https://issues.apache.org/jira/browse/HIVE-19921
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2
>Reporter: Harish Jaiprakash
>Assignee: Harish Jaiprakash
>Priority: Major
> Attachments: HIVE-19921.01-branch-3.patch, HIVE-19921.01.patch
>
>
> The perf log should return duration instead of end time.
> The queue name should be llap queue for llap queries.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19922) TestMiniDruidKafkaCliDriver[druidkafkamini_basic] is flaky

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516943#comment-16516943
 ] 

Hive QA commented on HIVE-19922:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928195/HIVE-19922.2.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11918/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11918/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11918/

Messages:
{noformat}
 This message was trimmed, see log for full details 
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/eclipse/jetty/jetty-xml/9.3.20.v20170531/jetty-xml-9.3.20.v20170531.jar(org/eclipse/jetty/xml/XmlConfiguration.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/slf4j/jul-to-slf4j/1.7.10/jul-to-slf4j-1.7.10.jar(org/slf4j/bridge/SLF4JBridgeHandler.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/DispatcherType.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/Filter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/FilterChain.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/FilterConfig.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/ServletException.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/ServletRequest.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/ServletResponse.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/annotation/WebFilter.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/http/HttpServletRequest.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/javax/servlet/javax.servlet-api/3.1.0/javax.servlet-api-3.1.0.jar(javax/servlet/http/HttpServletResponse.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/classification/target/hive-classification-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceAudience$LimitedPrivate.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/classification/target/hive-classification-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/common/classification/InterfaceStability$Unstable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/ByteArrayOutputStream.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/OutputStream.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Closeable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/lang/AutoCloseable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/io/Flushable.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(javax/xml/bind/annotation/XmlRootElement.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/commons/commons-exec/1.1/commons-exec-1.1.jar(org/apache/commons/exec/ExecuteException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/security/PrivilegedExceptionAction.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/ExecutionException.class)]]
[loading 
ZipFileIndexFileObject[/usr/lib/jvm/java-8-openjdk-amd64/jre/lib/rt.jar(java/util/concurrent/TimeoutException.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/maven/org/apache/hadoop/hadoop-common/3.1.0/hadoop-common-3.1.0.jar(org/apache/hadoop/fs/FileSystem.class)]]
[loading 
ZipFileIndexFileObject[/data/hiveptest/working/apache-github-source-source/shims/common/target/hive-shims-common-4.0.0-SNAPSHOT.jar(org/apache/hadoop/hive/shims/HadoopShimsSecure.class)]]
[loading 

[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516938#comment-16516938
 ] 

Hive QA commented on HIVE-18140:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928177/HIVE-18140.03.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:green}SUCCESS:{color} +1 due to 14535 tests passed

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11917/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11917/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11917/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928177 - PreCommit-HIVE-Build

> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-18140.01.patch, HIVE-18140.01wip01.patch, 
> HIVE-18140.01wip03.patch, HIVE-18140.01wip04.patch, HIVE-18140.02.patch, 
> HIVE-18140.02wip01.patch, HIVE-18140.03.patch, HIVE-19140.02wip02.patch, 
> HIVE-19727.02wip03.patch
>
>
> Suppose the following scenario:
> * part1 has basic stats {{RC=10,DS=1K}}
> * all other partitions have no basic stats (and a bunch of rows)
> then 
> [this|https://github.com/apache/hive/blob/d9924ab3e285536f7e2cc15ecbea36a78c59c66d/ql/src/java/org/apache/hadoop/hive/ql/stats/StatsUtils.java#L378]
>  condition would be false, which in turn produces an estimate for the whole 
> partitioned table of {{RC=10,DS=1K}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18140) Partitioned tables statistics can go wrong in basic stats mixed case

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-18140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516915#comment-16516915
 ] 

Hive QA commented on HIVE-18140:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  
8s{color} | {color:blue} ql in master has 2280 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  
9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
37s{color} | {color:red} ql: The patch generated 18 new + 80 unchanged - 5 
fixed = 98 total (was 85) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
13s{color} | {color:red} ql generated 1 new + 2280 unchanged - 0 fixed = 2281 
total (was 2280) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
54s{color} | {color:red} ql generated 1 new + 99 unchanged - 1 fixed = 100 
total (was 100) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 25m 41s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:ql |
|  |  Dead store to results in 
org.apache.hadoop.hive.ql.stats.StatsUtils.getNumRows(HiveConf, List, Table, 
PrunedPartitionList, AtomicInteger)  At 
StatsUtils.java:org.apache.hadoop.hive.ql.stats.StatsUtils.getNumRows(HiveConf, 
List, Table, PrunedPartitionList, AtomicInteger)  At StatsUtils.java:[line 189] 
|
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  
checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-11917/dev-support/hive-personality.sh
 |
| git revision | master / 2394e40 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11917/yetus/diff-checkstyle-ql.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11917/yetus/new-findbugs-ql.html
 |
| javadoc | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11917/yetus/diff-javadoc-javadoc-ql.txt
 |
| modules | C: itests/qtest ql U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11917/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Partitioned tables statistics can go wrong in basic stats mixed case
> 
>
> Key: HIVE-18140
> URL: https://issues.apache.org/jira/browse/HIVE-18140
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: 

[jira] [Assigned] (HIVE-19944) Investigate and fix version mismatch of GCP

2018-06-19 Thread Adam Szita (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Szita reassigned HIVE-19944:
-


> Investigate and fix version mismatch of GCP
> ---
>
> Key: HIVE-19944
> URL: https://issues.apache.org/jira/browse/HIVE-19944
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
>
> We've observed that adding a new image to the ptest GCP project breaks our 
> currently working infrastructure when we try to restart the hive ptest server.
> This is because upon initialization the project's images are queried and we 
> immediately get an exception for newly added images - they don't have a field 
> that our client treats as mandatory. I believe an upgrade is needed on our 
> side for the GCP libs we depend on.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19880) Repl Load to return recoverable vs non-recoverable error codes

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516892#comment-16516892
 ] 

Hive QA commented on HIVE-19880:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928178/HIVE-19880.04-branch-3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 14350 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=256)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=256)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testManagedPaths 
(batchId=233)
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.testImpersonation 
(batchId=242)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=308)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11916/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11916/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11916/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928178 - PreCommit-HIVE-Build

> Repl Load to return recoverable vs non-recoverable error codes
> --
>
> Key: HIVE-19880
> URL: https://issues.apache.org/jira/browse/HIVE-19880
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Affects Versions: 3.1.0, 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19880.01.patch, HIVE-19880.04-branch-3.patch, 
> HIVE-19880.04.patch
>
>
> To enable bootstrap of large databases, the application has to be able to 
> keep retrying the bootstrap load until it encounters a fatal error. Whether 
> an error is fatal is decided by Hive and communicated to the application via 
> error codes.
> So there should be different error codes for recoverable vs non-recoverable 
> failures, and they should be propagated to the application as part of running 
> the repl load command.
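A minimal sketch of how an application could drive such a retry loop; the code values and names here are hypothetical placeholders, not the codes the patch defines:

{code:java}
import java.util.function.IntSupplier;

public class ReplLoadRetrySketch {
  // Hypothetical split of exit codes; the real values come from Hive.
  static final int SUCCESS = 0;
  static final int RECOVERABLE = 1;

  /** Keep retrying the bootstrap REPL LOAD only while failures are recoverable. */
  static void bootstrapWithRetry(IntSupplier runReplLoad, int maxAttempts) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      int code = runReplLoad.getAsInt();
      if (code == SUCCESS) {
        return;
      }
      if (code != RECOVERABLE) {
        // Fatal (non-recoverable) error: stop retrying immediately.
        throw new IllegalStateException("Fatal REPL LOAD error, code " + code);
      }
      // Recoverable: loop and retry the load.
    }
    throw new IllegalStateException("REPL LOAD still failing after " + maxAttempts + " attempts");
  }
}
{code}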



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19016) Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces RuntimeException: Unsupported type used

2018-06-19 Thread Matt McCline (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516871#comment-16516871
 ] 

Matt McCline commented on HIVE-19016:
-

Adding full nested support for complex types is "complex" to say the least.

For now, just disabling vectorization of PARQUET when nested complex types are 
detected.

> Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces 
> RuntimeException: Unsupported type used
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19016) Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces RuntimeException: Unsupported type used

2018-06-19 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline updated HIVE-19016:

Attachment: HIVE-19016.01.patch

> Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces 
> RuntimeException: Unsupported type used
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
> Attachments: HIVE-19016.01.patch
>
>
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19016) Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces RuntimeException: Unsupported type used

2018-06-19 Thread Matt McCline (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt McCline reassigned HIVE-19016:
---

Assignee: Matt McCline  (was: Haifeng Chen)

> Vectorization and Parquet: When vectorized, parquet_nested_complex.q produces 
> RuntimeException: Unsupported type used
> -
>
> Key: HIVE-19016
> URL: https://issues.apache.org/jira/browse/HIVE-19016
> Project: Hive
>  Issue Type: Bug
>  Components: Hive
>Affects Versions: 3.0.0
>Reporter: Matt McCline
>Assignee: Matt McCline
>Priority: Critical
>
> Adding "SET hive.vectorized.execution.enabled=true;" to 
> parquet_nested_complex.q triggers this call stack:
> {noformat}
> Caused by: java.lang.RuntimeException: Unsupported type used in 
> list:array>
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkListColumnSupport(VectorizedParquetRecordReader.java:589)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.buildVectorizedParquetReader(VectorizedParquetRecordReader.java:525)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.checkEndOfRowGroup(VectorizedParquetRecordReader.java:440)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.nextBatch(VectorizedParquetRecordReader.java:401)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:353)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.parquet.vector.VectorizedParquetRecordReader.next(VectorizedParquetRecordReader.java:92)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
>   at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
>  ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT]
> {noformat}
> FYI: [~vihangk1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17852) remove support for list bucketing "stored as directories" in 3.0

2018-06-19 Thread Laszlo Bodor (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516860#comment-16516860
 ] 

Laszlo Bodor commented on HIVE-17852:
-

Failures are unrelated and pass locally; uploading 10.patch to retrigger the 
tests.

> remove support for list bucketing "stored as directories" in 3.0
> 
>
> Key: HIVE-17852
> URL: https://issues.apache.org/jira/browse/HIVE-17852
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-17852.01.patch, HIVE-17852.02.patch, 
> HIVE-17852.03.patch, HIVE-17852.04.patch, HIVE-17852.05.patch, 
> HIVE-17852.06.patch, HIVE-17852.07.patch, HIVE-17852.08.patch, 
> HIVE-17852.09.patch, HIVE-17852.10.patch
>
>
> From the email thread:
> 1) LB, when stored as directories, adds a lot of low-level complexity to Hive 
> tables that has to be accounted for in many places in the code where the 
> files are written or modified - from FSOP to ACID/replication/export.
> 2) While working on some FSOP code I noticed that some of that logic is 
> broken - e.g. the duplicate file removal from tasks, a pretty fundamental 
> correctness feature in Hive, may be broken. LB also doesn’t appear to be 
> compatible with e.g. regular bucketing.
> 3) The feature hasn’t seen development activity in a while; it also doesn’t 
> appear to be used a lot.
> Keeping with the theme of cleaning up “legacy” code for 3.0, I was proposing 
> we remove it.
> (2) also suggested that, if needed, it might be easier to implement similar 
> functionality by adding some flexibility to partitions (which LB directories 
> look like anyway); that would also keep the logic on a higher level of 
> abstraction (split generation, partition pruning) as opposed to many 
> low-level places like FSOP, etc.
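>
> For reference, a sketch of the DDL form in question (illustrative names); it is the STORED AS DIRECTORIES variant of list bucketing that this proposal removes:
> {noformat}
> -- List bucketing "stored as directories": the listed skewed values are written
> -- into their own subdirectories under the table/partition location.
> CREATE TABLE lb_example (key STRING, value STRING)
> SKEWED BY (key) ON ('k1', 'k2')
> STORED AS DIRECTORIES;
> {noformat}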



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17852) remove support for list bucketing "stored as directories" in 3.0

2018-06-19 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-17852:

Attachment: HIVE-17852.10.patch

> remove support for list bucketing "stored as directories" in 3.0
> 
>
> Key: HIVE-17852
> URL: https://issues.apache.org/jira/browse/HIVE-17852
> Project: Hive
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Laszlo Bodor
>Priority: Major
> Fix For: 4.0.0
>
> Attachments: HIVE-17852.01.patch, HIVE-17852.02.patch, 
> HIVE-17852.03.patch, HIVE-17852.04.patch, HIVE-17852.05.patch, 
> HIVE-17852.06.patch, HIVE-17852.07.patch, HIVE-17852.08.patch, 
> HIVE-17852.09.patch, HIVE-17852.10.patch
>
>
> From the email thread:
> 1) LB, when stored as directories, adds a lot of low-level complexity to Hive 
> tables that has to be accounted for in many places in the code where the 
> files are written or modified - from FSOP to ACID/replication/export.
> 2) While working on some FSOP code I noticed that some of that logic is 
> broken - e.g. the duplicate file removal from tasks, a pretty fundamental 
> correctness feature in Hive, may be broken. LB also doesn’t appear to be 
> compatible with e.g. regular bucketing.
> 3) The feature hasn’t seen development activity in a while; it also doesn’t 
> appear to be used a lot.
> Keeping with the theme of cleaning up “legacy” code for 3.0, I was proposing 
> we remove it.
> (2) also suggested that, if needed, it might be easier to implement similar 
> functionality by adding some flexibility to partitions (which LB directories 
> look like anyway); that would also keep the logic on a higher level of 
> abstraction (split generation, partition pruning) as opposed to many 
> low-level places like FSOP, etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19569) alter table db1.t1 rename db2.t2 generates MetaStoreEventListener.onDropTable()

2018-06-19 Thread Peter Vary (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516814#comment-16516814
 ] 

Peter Vary commented on HIVE-19569:
---

[~maheshk114]: Thanks! Ping me if you need review :D

> alter table db1.t1 rename db2.t2 generates 
> MetaStoreEventListener.onDropTable()
> ---
>
> Key: HIVE-19569
> URL: https://issues.apache.org/jira/browse/HIVE-19569
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Standalone Metastore, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19569.01-branch-3.patch, HIVE-19569.01.patch, 
> HIVE-19569.02.patch, HIVE-19569.03.patch, HIVE-19569.04.patch
>
>
> When renaming a table within the same DB, this operation causes 
> {{MetaStoreEventListener.onAlterTable()}} to fire but when changing DB name 
> for a table it causes {{MetaStoreEventListener.onDropTable()}} + 
> {{MetaStoreEventListener.onCreateTable()}}.
> The files from the original table are moved to the new table location.
> This creates confusing semantics, since any logic in {{onDropTable()}} doesn't 
> know about the larger context, i.e. that there will be a matching 
> {{onCreateTable()}}.
> In particular, this causes a problem for ACID tables, since files moved from 
> the old table use WriteIDs that are not meaningful within the context of the new table.
> The current implementation is due to replication. This should ideally be changed 
> to raise a "not supported" error for tables that are marked for replication.
> cc [~sankarh]
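>
> A sketch of the two cases (illustrative object names):
> {noformat}
> -- Rename within the same DB: listeners see onAlterTable().
> ALTER TABLE db1.t1 RENAME TO db1.t1_renamed;
>
> -- Rename across DBs: listeners currently see onDropTable() + onCreateTable().
> ALTER TABLE db1.t1 RENAME TO db2.t2;
> {noformat}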



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19569) alter table db1.t1 rename db2.t2 generates MetaStoreEventListener.onDropTable()

2018-06-19 Thread mahesh kumar behera (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516812#comment-16516812
 ] 

mahesh kumar behera commented on HIVE-19569:


[~pvary]

Sure, I will add a follow-up Jira to keep the exception the same as before.

> alter table db1.t1 rename db2.t2 generates 
> MetaStoreEventListener.onDropTable()
> ---
>
> Key: HIVE-19569
> URL: https://issues.apache.org/jira/browse/HIVE-19569
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore, Standalone Metastore, Transactions
>Affects Versions: 3.0.0
>Reporter: Eugene Koifman
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.1.0, 4.0.0
>
> Attachments: HIVE-19569.01-branch-3.patch, HIVE-19569.01.patch, 
> HIVE-19569.02.patch, HIVE-19569.03.patch, HIVE-19569.04.patch
>
>
> When renaming a table within the same DB, this operation causes 
> {{MetaStoreEventListener.onAlterTable()}} to fire but when changing DB name 
> for a table it causes {{MetaStoreEventListener.onDropTable()}} + 
> {{MetaStoreEventListener.onCreateTable()}}.
> The files from the original table are moved to the new table location.
> This creates confusing semantics, since any logic in {{onDropTable()}} doesn't 
> know about the larger context, i.e. that there will be a matching 
> {{onCreateTable()}}.
> In particular, this causes a problem for ACID tables, since files moved from 
> the old table use WriteIDs that are not meaningful within the context of the new table.
> The current implementation is due to replication. This should ideally be changed 
> to raise a "not supported" error for tables that are marked for replication.
> cc [~sankarh]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19880) Repl Load to return recoverable vs non-recoverable error codes

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516802#comment-16516802
 ] 

Hive QA commented on HIVE-19880:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 15s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-11916/patches/PreCommit-HIVE-Build-11916.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-11916/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Repl Load to return recoverable vs non-recoverable error codes
> --
>
> Key: HIVE-19880
> URL: https://issues.apache.org/jira/browse/HIVE-19880
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Affects Versions: 3.1.0, 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19880.01.patch, HIVE-19880.04-branch-3.patch, 
> HIVE-19880.04.patch
>
>
> To enable bootstrap of large databases, the application has to be able to 
> keep retrying the bootstrap load until it encounters a fatal error. Hive 
> decides whether an error is fatal or not, and communicates this to the 
> application via error codes.
> So there should be different error codes for recoverable vs. non-recoverable 
> failures, which should be propagated to the application as part of running the 
> REPL LOAD command.
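>
> A sketch of the intended retry pattern; the Hive 3 REPL LOAD syntax and dump path below are illustrative, and the actual recoverable vs. non-recoverable error codes are exactly what this task defines:
> {noformat}
> -- Run the bootstrap load; the application inspects the returned error code.
> --   * recoverable error code   -> re-run the same REPL LOAD command
> --   * non-recoverable (fatal)  -> stop and surface the failure
> REPL LOAD target_db FROM '/tmp/repl/bootstrap_dump';
> {noformat}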



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19725) Add ability to dump non-native tables in replication metadata dump

2018-06-19 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16516799#comment-16516799
 ] 

Hive QA commented on HIVE-19725:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12928181/HIVE-19725.07-branch-3.patch

{color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 8 failed/errored test(s), 14350 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=256)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=256)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=256)
org.apache.hadoop.hive.ql.TestWarehouseExternalDir.testManagedPaths 
(batchId=233)
org.apache.hive.service.TestHS2ImpersonationWithRemoteMS.testImpersonation 
(batchId=242)
org.apache.hive.spark.client.rpc.TestRpc.testServerPort (batchId=308)
{noformat}

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/11915/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/11915/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-11915/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 8 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12928181 - PreCommit-HIVE-Build

> Add ability to dump non-native tables in replication metadata dump
> --
>
> Key: HIVE-19725
> URL: https://issues.apache.org/jira/browse/HIVE-19725
> Project: Hive
>  Issue Type: Task
>  Components: repl
>Affects Versions: 3.1.0, 4.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: Repl, pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-19725.01.patch, HIVE-19725.02.patch, 
> HIVE-19725.03.patch, HIVE-19725.04.patch, HIVE-19725.05.patch, 
> HIVE-19725.06-branch-3.patch, HIVE-19725.07-branch-3.patch, 
> HIVE-19725.07.patch
>
>
> If hive.repl.dump.metadata.only is set to true, allow dumping non-native 
> tables as well.
> A data dump for non-native tables should never be allowed.
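>
> A sketch of the metadata-only dump this enables; the flag is the one named above, and the database name is illustrative:
> {noformat}
> SET hive.repl.dump.metadata.only=true;
> -- With the flag above, non-native (storage-handler backed) tables are included
> -- in the dump as metadata only; their data is never dumped.
> REPL DUMP src_db;
> {noformat}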



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-15976) Support CURRENT_CATALOG and CURRENT_SCHEMA

2018-06-19 Thread Laszlo Bodor (JIRA)


 [ 
https://issues.apache.org/jira/browse/HIVE-15976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Laszlo Bodor updated HIVE-15976:

Attachment: HIVE-15976.07.patch

> Support CURRENT_CATALOG and CURRENT_SCHEMA
> --
>
> Key: HIVE-15976
> URL: https://issues.apache.org/jira/browse/HIVE-15976
> Project: Hive
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Carter Shanklin
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-15976.01.patch, HIVE-15976.02.patch, 
> HIVE-15976.03.patch, HIVE-15976.04.patch, HIVE-15976.05.patch, 
> HIVE-15976.06.patch, HIVE-15976.07.patch
>
>
> Support these keywords for querying the current catalog and schema. SQL 
> reference: section 6.4
> *oracle*
> CREATE TABLE CURRENT_SCHEMA (col VARCHAR2(1)); -- ok
> SELECT CURRENT_SCHEMA FROM DUAL; -- error, ORA-00904: "CURRENT_SCHEMA": 
> invalid identifier
> SELECT CURRENT_SCHEMA() FROM DUAL; -- error, ORA-00904: "CURRENT_SCHEMA": 
> invalid identifier
> *postgres*
> CREATE TABLE CURRENT_SCHEMA (col VARCHAR(1)); -- error: syntax error at or 
> near "CURRENT_SCHEMA"
> SELECT CURRENT_SCHEMA; -- ok, "public"
> SELECT CURRENT_SCHEMA(); -- ok, "public"
> *mysql*
> CREATE TABLE CURRENT_SCHEMA (col VARCHAR(1)); -- ok
> SELECT CURRENT_SCHEMA; -- error, Unknown column 'CURRENT_SCHEMA' in 'field 
> list'
> SELECT CURRENT_SCHEMA(); -- error, FUNCTION db_9_e28e6f.CURRENT_SCHEMA does 
> not exist
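>
> For comparison, a sketch of what the corresponding Hive usage could look like once this is supported; whether the final surface is a keyword, a function, or both is what the patch decides, so treat the form below as an assumption:
> {noformat}
> SELECT CURRENT_CATALOG(), CURRENT_SCHEMA();   -- e.g. 'hive', 'default'
> {noformat}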



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


  1   2   >