[jira] [Updated] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19498:
--------------------------------
    Attachment: HIVE-19498.03.patch

> Vectorization: CAST expressions produce wrong results
> -----------------------------------------------------
>
>                 Key: HIVE-19498
>                 URL: https://issues.apache.org/jira/browse/HIVE-19498
>             Project: Hive
>          Issue Type: Bug
>          Components: Hive
>            Reporter: Matt McCline
>            Assignee: Matt McCline
>            Priority: Blocker
>             Fix For: 3.1.0
>
>         Attachments: HIVE-19498.01.patch, HIVE-19498.02.patch, HIVE-19498.03.patch
>
> Wrong results for:
> DATE --> BOOLEAN
> DOUBLE --> DECIMAL
> STRING|CHAR|VARCHAR --> DECIMAL
> TIMESTAMP --> LONG

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
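The casts listed above have row-mode semantics that the vectorized expressions must reproduce. A minimal sketch of two of them, in plain Python rather than Hive's Java vectorization code; the semantics shown (exact decimal parsing for strings with NULL on junk, decimal-literal conversion for doubles) are assumptions about the intended behavior, not taken from the patch:

```python
# Illustrative sketch, NOT Hive code: the scalar semantics a vectorized
# CAST is expected to match. Function names are hypothetical.
from decimal import Decimal, InvalidOperation

def cast_string_to_decimal(s):
    """STRING|CHAR|VARCHAR -> DECIMAL: exact parse, NULL (None) on junk."""
    try:
        return Decimal(s.strip())
    except InvalidOperation:
        return None

def cast_double_to_decimal(d):
    """DOUBLE -> DECIMAL: convert via the double's shortest decimal repr,
    so 0.1 becomes Decimal('0.1') rather than its full binary expansion."""
    return Decimal(repr(d))

assert cast_string_to_decimal(" 12.34 ") == Decimal("12.34")
assert cast_string_to_decimal("abc") is None
assert cast_double_to_decimal(0.1) == Decimal("0.1")
```

A vectorized implementation applies the same per-element rule across a column batch; the bug report is that several of these batch kernels diverged from the row-mode rule.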
[jira] [Updated] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19498:
--------------------------------
    Attachment: (was: HIVE-19498.03.patch)

>         Attachments: HIVE-19498.01.patch, HIVE-19498.02.patch
[jira] [Updated] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19498:
--------------------------------
    Status: In Progress (was: Patch Available)
[jira] [Updated] (HIVE-19498) Vectorization: CAST expressions produce wrong results
[ https://issues.apache.org/jira/browse/HIVE-19498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt McCline updated HIVE-19498:
--------------------------------
    Status: Patch Available (was: In Progress)
[jira] [Commented] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476870#comment-16476870 ]

Hive QA commented on HIVE-19490:
--------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| 0 | mvndep | 0m 42s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 31s | master passed |
| +1 | compile | 1m 20s | master passed |
| +1 | checkstyle | 0m 56s | master passed |
| 0 | findbugs | 0m 25s | druid-handler in master has 12 extant Findbugs warnings. |
| 0 | findbugs | 3m 49s | ql in master has 2320 extant Findbugs warnings. |
| +1 | javadoc | 1m 10s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 14s | druid-handler in the patch failed. |
| +1 | compile | 1m 16s | the patch passed |
| +1 | javac | 1m 16s | the patch passed |
| +1 | checkstyle | 0m 10s | druid-handler: The patch generated 0 new + 44 unchanged - 12 fixed = 44 total (was 56) |
| -1 | checkstyle | 0m 46s | ql: The patch generated 3 new + 1023 unchanged - 52 fixed = 1026 total (was 1075) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 32s | the patch passed |
| +1 | javadoc | 1m 6s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | 25m 19s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10980/dev-support/hive-personality.sh |
| git revision | master / 38c757c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-10980/yetus/patch-mvninstall-druid-handler.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10980/yetus/diff-checkstyle-ql.txt |
| modules | C: druid-handler ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10980/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Locking on Insert into for non native and managed tables.
> ---------------------------------------------------------
>
>                 Key: HIVE-19490
>                 URL: https://issues.apache.org/jira/browse/HIVE-19490
>             Project: Hive
>          Issue Type: Improvement
>          Components: Druid integration
>            Reporter: slim bouguerra
>            Assignee: slim bouguerra
>            Priority: Major
>              Labels: druid, locking
>         Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, HIVE-19490.4.patch, HIVE-19490.5.patch, HIVE-19490.patch
>
> Current state of the art:
> Managed non-native tables, such as Druid tables, need to take a lock on INSERT INTO or INSERT OVERWRITE. The nature of this
[jira] [Commented] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476844#comment-16476844 ]

Hive QA commented on HIVE-18748:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12923586/HIVE-18748.04.patch

SUCCESS: +1 due to 1 test(s) being added or modified.
ERROR: -1 due to 4 failed/errored test(s), 14397 tests executed

*Failed tests:*
{noformat}
TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=94)
  [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,scriptfile1.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q]
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4] (batchId=164)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_bmj_schema_evolution] (batchId=154)
org.apache.hadoop.hive.ql.TestAutoPurgeTables.testNoAutoPurge (batchId=233)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10979/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10979/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10979/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12923586 - PreCommit-HIVE-Build

> Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
> --------------------------------------------------------------------------------------
>
>                 Key: HIVE-18748
>                 URL: https://issues.apache.org/jira/browse/HIVE-18748
>             Project: Hive
>          Issue Type: Sub-task
>          Components: HiveServer2, Transactions
>    Affects Versions: 3.0.0
>            Reporter: Sankar Hariappan
>            Assignee: Eugene Koifman
>            Priority: Critical
>              Labels: ACID, DDL
>         Attachments: HIVE-18748.02.patch, HIVE-18748.03.patch, HIVE-18748.04.patch
>
> The ACID implementation uses meta-tables such as TXN_COMPONENTS, COMPLETED_TXN_COMPONENTS, COMPACTION_QUEUE, COMPLETED_COMPACTION_QUEUE, etc. to manage ACID operations.
> The per-table write ID implementation (HIVE-18192) introduces a couple of meta-tables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage the write IDs allocated per table.
> Now, when we rename a table, it is necessary to update the corresponding table names in these meta-tables as well; otherwise, ACID table operations won't work properly.
> Since this change is significant and has other side effects, we propose to disable renaming ACID tables until a fix is figured out.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
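The fix the issue describes has to propagate a rename into every meta-table keyed on (database, table) inside one transaction. A hypothetical sketch of that shape, using Python and sqlite3 with a deliberately simplified schema (the real fix is Java in the metastore's TxnHandler; only the meta-table names come from the issue text):

```python
# Sketch: propagate a table rename into every ACID meta-table atomically.
# Schema and column names here are simplified stand-ins.
import sqlite3

META_TABLES = ["TXN_COMPONENTS", "COMPLETED_TXN_COMPONENTS",
               "COMPACTION_QUEUE", "NEXT_WRITE_ID", "TXN_TO_WRITE_ID"]

db = sqlite3.connect(":memory:")
for t in META_TABLES:
    db.execute(f"CREATE TABLE {t} (db_name TEXT, tbl_name TEXT)")
    db.execute(f"INSERT INTO {t} VALUES ('default', 'old_tbl')")

def on_rename(conn, dbname, old, new):
    # One transaction: either every meta-table sees the new name or none does.
    with conn:
        for t in META_TABLES:
            conn.execute(
                f"UPDATE {t} SET tbl_name = ? WHERE db_name = ? AND tbl_name = ?",
                (new, dbname, old))

on_rename(db, "default", "old_tbl", "new_tbl")
assert db.execute("SELECT tbl_name FROM NEXT_WRITE_ID").fetchall() == [("new_tbl",)]
```

Missing any one of these UPDATEs is exactly the failure mode described: the rename succeeds in the catalog while ACID bookkeeping still points at the old name.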
[jira] [Commented] (HIVE-19485) dump directory for non native tables should not be created
[ https://issues.apache.org/jira/browse/HIVE-19485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476834#comment-16476834 ]

ASF GitHub Bot commented on HIVE-19485:
---------------------------------------

GitHub user anishek opened a pull request:

    https://github.com/apache/hive/pull/350

    HIVE-19485 : dump directory for non native tables should not be created

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/anishek/hive HIVE-19485

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/hive/pull/350.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #350

commit 83d9e701bb317c989372d2c523912cc0cc1e9eae
Author: Anishek Agarwal
Date:   2018-05-10T10:37:26Z

    HIVE-19485 : dump directory for non native tables should not be created

> dump directory for non native tables should not be created
> ----------------------------------------------------------
>
>                 Key: HIVE-19485
>                 URL: https://issues.apache.org/jira/browse/HIVE-19485
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 3.1.0
>            Reporter: anishek
>            Assignee: anishek
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.1.0
>
>         Attachments: HIVE-19485.0.patch, HIVE-19485.1.patch
>

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[jira] [Updated] (HIVE-19485) dump directory for non native tables should not be created
[ https://issues.apache.org/jira/browse/HIVE-19485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HIVE-19485:
----------------------------------
    Labels: pull-request-available (was: )
[jira] [Commented] (HIVE-19500) Prevent multiple selectivity estimations for the same variable in conjuctions
[ https://issues.apache.org/jira/browse/HIVE-19500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476824#comment-16476824 ]

Ashutosh Chauhan commented on HIVE-19500:
-----------------------------------------

+1

> Prevent multiple selectivity estimations for the same variable in conjuctions
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-19500
>                 URL: https://issues.apache.org/jira/browse/HIVE-19500
>             Project: Hive
>          Issue Type: Sub-task
>    Affects Versions: 3.0.0, 3.1.0
>            Reporter: Zoltan Haindrich
>            Assignee: Zoltan Haindrich
>            Priority: Major
>         Attachments: HIVE-19500.01.patch, HIVE-19500.02.patch
>
> See HIVE-19097 for the problem description.
> For filters like {{(d_year in (2001,2002) and d_year = 2001)}}, the current estimate is around {{(1/NDV)**2}} (iff column stats are available).
> The actual source of the problem was a small typo in HIVE-17465.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
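The estimation bug the issue describes can be shown with a toy cardinality model (plain Python, not Hive's actual stats code; the function names and the uniform-NDV assumption are illustrative): multiplying in every conjunct independently double-counts predicates on the same column, while keeping only the most selective estimate per column gives the intended 1/NDV.

```python
# Toy selectivity model for conjunctions of equality/IN predicates.
# preds is a list of (column, value_set) conjuncts; ndv is the column's
# assumed number of distinct values.

def naive_selectivity(preds, ndv):
    # Treats every conjunct as independent -- double-counts repeated columns.
    sel = 1.0
    for _col, values in preds:
        sel *= len(values) / ndv
    return sel

def dedup_selectivity(preds, ndv):
    # Keep one (the most selective) estimate per column.
    best = {}
    for col, values in preds:
        best[col] = min(best.get(col, ndv), len(values))
    sel = 1.0
    for n in best.values():
        sel *= n / ndv
    return sel

# d_year IN (2001, 2002) AND d_year = 2001
preds = [("d_year", {2001, 2002}), ("d_year", {2001})]
assert naive_selectivity(preds, ndv=100) == (2 / 100) * (1 / 100)  # ~ (1/NDV)**2
assert dedup_selectivity(preds, ndv=100) == 1 / 100                # intended
```

The underestimate matters because a too-small row count downstream can push the planner toward joins and parallelism choices that the real data volume cannot support.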
[jira] [Commented] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476818#comment-16476818 ]

Hive QA commented on HIVE-18748:
--------------------------------

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| master Compile Tests ||
| 0 | mvndep | 0m 45s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 46s | master passed |
| +1 | compile | 1m 43s | master passed |
| +1 | checkstyle | 1m 2s | master passed |
| 0 | findbugs | 3m 52s | ql in master has 2320 extant Findbugs warnings. |
| 0 | findbugs | 2m 53s | standalone-metastore in master has 215 extant Findbugs warnings. |
| +1 | javadoc | 2m 27s | master passed |
|| Patch Compile Tests ||
| 0 | mvndep | 0m 8s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 9s | the patch passed |
| +1 | compile | 1m 44s | the patch passed |
| +1 | javac | 1m 44s | the patch passed |
| -1 | checkstyle | 0m 36s | ql: The patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2) |
| -1 | checkstyle | 0m 21s | standalone-metastore: The patch generated 10 new + 620 unchanged - 0 fixed = 630 total (was 620) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| -1 | findbugs | 3m 7s | standalone-metastore generated 1 new + 215 unchanged - 0 fixed = 216 total (was 215) |
| +1 | javadoc | 2m 0s | the patch passed |
|| Other Tests ||
| +1 | asflicense | 0m 12s | The patch does not generate ASF License warnings. |
| | | 34m 32s | |

|| Reason || Tests ||
| FindBugs | module:standalone-metastore |
| | org.apache.hadoop.hive.metastore.txn.TxnHandler.onRename(String, String, String, String, String, String, String, String) passes a nonconstant String to an execute or addBatch method on an SQL statement. At TxnHandler.java:[line 2918] |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10979/dev-support/hive-personality.sh |
| git revision | master / 38c757c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10979/yetus/diff-checkstyle-ql.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10979/yetus/diff-checkstyle-standalone-metastore.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-10979/yetus/new-findbugs-standalone-metastore.html |
| modules | C: ql standalone-metastore U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10979/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
>
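The new FindBugs warning above fires when SQL text is assembled by string concatenation and handed straight to an execute method. The standard remedy, sketched here with Python's sqlite3 as a stand-in for the JDBC code FindBugs was actually scanning (table and column names are simplified), is a parameterized statement: values travel as bind parameters, never as SQL text.

```python
# Parameterized statement vs. string concatenation, the pattern behind the
# "passes a nonconstant String to an execute method" FindBugs warning.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE TXN_COMPONENTS (tc_database TEXT, tc_table TEXT)")
db.execute("INSERT INTO TXN_COMPONENTS VALUES ('default', 'old_name')")

# Hostile or merely awkward input is inert when bound as a parameter.
new_name = "new'); DROP TABLE TXN_COMPONENTS; --"

# Flagged shape (in Java: stmt.executeUpdate("UPDATE ... '" + newName + "'")).
# Preferred shape: placeholders plus a parameter tuple.
db.execute("UPDATE TXN_COMPONENTS SET tc_table = ? WHERE tc_table = ?",
           (new_name, "old_name"))

assert db.execute("SELECT tc_table FROM TXN_COMPONENTS").fetchone()[0] == new_name
```

In JDBC the same shape is a PreparedStatement with `?` placeholders; besides silencing the warning, it closes the injection path and lets the driver cache the statement plan.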
[jira] [Updated] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

slim bouguerra updated HIVE-19490:
----------------------------------
    Attachment: HIVE-19490.5.patch

> Locking on Insert into for non native and managed tables.
> ---------------------------------------------------------
>
>                 Key: HIVE-19490
>                 URL: https://issues.apache.org/jira/browse/HIVE-19490
>             Project: Hive
>          Issue Type: Improvement
>          Components: Druid integration
>            Reporter: slim bouguerra
>            Assignee: slim bouguerra
>            Priority: Major
>              Labels: druid, locking
>         Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, HIVE-19490.4.patch, HIVE-19490.5.patch, HIVE-19490.patch
>
> Current state of the art:
> Managed non-native tables, such as Druid tables, need to take a lock on INSERT INTO or INSERT OVERWRITE. This lock is set to Exclusive by default for any non-native table.
> This implies that an insert into a Druid table will also block any read query for the duration of the insert. In my opinion this lock (on INSERT INTO) is not needed, since the insert statement appends data and the loading state is managed partly by the Hive storage handler hook and partly by Druid.
> What I am proposing is to relax the lock level to Shared for all non-native tables on INSERT INTO operations, and keep it as Exclusive Write for INSERT OVERWRITE for now.
>
> Any feedback is welcome.
> cc [~ekoifman] / [~ashutoshc] / [~jdere] / [~hagleitn]
> Also, I am not sure of the best way to unit test this; currently I am using a debugger to check that the locks are what I expect. Please let me know if there is a better way to do this.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
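The policy proposed in the description above reduces to a small decision rule. A hedged sketch in Python (the enum values and function name are invented for illustration; Hive's real lock types live in its Java lock manager): INSERT INTO a non-native managed table takes a shared lock so concurrent readers keep running, while INSERT OVERWRITE keeps an exclusive lock because it replaces data.

```python
# Sketch of the proposed lock policy for non-native (e.g. Druid) tables.
# Names are hypothetical, not Hive's actual lock-manager API.
from enum import Enum

class LockType(Enum):
    SHARED_READ = "shared"
    EXCLUSIVE = "exclusive"

def lock_for_write(is_non_native: bool, is_overwrite: bool) -> LockType:
    if is_non_native and not is_overwrite:
        return LockType.SHARED_READ   # append-only: readers stay unblocked
    return LockType.EXCLUSIVE         # overwrite replaces state: serialize

assert lock_for_write(is_non_native=True, is_overwrite=False) is LockType.SHARED_READ
assert lock_for_write(is_non_native=True, is_overwrite=True) is LockType.EXCLUSIVE
```

The design trade-off is the usual one: a shared lock maximizes read availability during appends, at the cost of relying on the storage handler (and Druid's own segment handoff) to keep partially loaded data invisible to readers.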
[jira] [Commented] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476802#comment-16476802 ]

Hive QA commented on HIVE-19490:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12923583/HIVE-19490.4.patch

ERROR: -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10978/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10978/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10978/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output:
+ date '+%Y-%m-%d %T.%3N'
2018-05-16 03:58:36.246
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10978/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-05-16 03:58:36.249
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 38c757c HIVE-19495: Arrow SerDe itest failure (Teddy Choi, reviewed by Matt McCline)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 38c757c HIVE-19495: Arrow SerDe itest failure (Teddy Choi, reviewed by Matt McCline)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-05-16 03:58:37.201
+ rm -rf ../yetus_PreCommit-HIVE-Build-10978
+ mkdir ../yetus_PreCommit-HIVE-Build-10978
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10978
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10978/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch
error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/arrow/ArrowColumnarBatchSerDe.java:1
error: ql/src/java/org/apache/hadoop/hive/ql/io/arrow/ArrowColumnarBatchSerDe.java: patch does not apply
error: patch failed: ql/src/test/org/apache/hadoop/hive/ql/io/arrow/TestArrowColumnarBatchSerDe.java:1
error: ql/src/test/org/apache/hadoop/hive/ql/io/arrow/TestArrowColumnarBatchSerDe.java: patch does not apply
fatal: git apply: bad git-diff - inconsistent old filename on line 1060
fatal: git diff header lacks filename information when removing 2 leading pathname components (line 71)
The patch does not appear to apply with p0, p1, or p2
+ exit 1
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12923583 - PreCommit-HIVE-Build

> Locking on Insert into for non native and managed tables.
> ---------------------------------------------------------
[jira] [Commented] (HIVE-18875) Enable SMB Join by default in Tez
[ https://issues.apache.org/jira/browse/HIVE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476799#comment-16476799 ]

Hive QA commented on HIVE-18875:
--------------------------------

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12923548/HIVE-18875.3.patch

SUCCESS: +1 due to 1 test(s) being added or modified.
ERROR: -1 due to 34 failed/errored test(s), 14405 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_12] (batchId=253)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[smb_mapjoin_12] (batchId=10)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_subq_exists] (batchId=173)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_subq_not_in] (batchId=167)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[correlationoptimizer2] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[correlationoptimizer6] (batchId=166)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_1] (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_grouping_id2] (batchId=162)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[limit_pushdown] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mrr] (batchId=157)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[offset_limit_ppd_optimizer] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_1] (batchId=170)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_cache] (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_14] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_17] (batchId=155)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_4] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[smb_mapjoin_6] (batchId=161)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_in_having] (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[subquery_notin] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[table_access_keys_stats] (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_stats] (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_groupby_grouping_id2] (batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_bucketmapjoin1] (batchId=159)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketmapjoin1] (batchId=146)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[skewjoinopt19] (batchId=117)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[skewjoinopt20] (batchId=142)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_11] (batchId=107)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_12] (batchId=111)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_14] (batchId=135)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_17] (batchId=109)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_4] (batchId=118)
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[smb_mapjoin_6] (batchId=122)
org.apache.hadoop.hive.metastore.client.TestRuntimeStats.testCleanup[Remote] (batchId=209)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10977/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10977/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10977/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 34 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12923548 - PreCommit-HIVE-Build

> Enable SMB Join by default in Tez
> ---------------------------------
>
>                 Key: HIVE-18875
>                 URL: https://issues.apache.org/jira/browse/HIVE-18875
>             Project: Hive
>          Issue Type: Task
>            Reporter: Deepak Jaiswal
>            Assignee: Deepak Jaiswal
>            Priority: Major
>         Attachments: HIVE-18875.1.patch, HIVE-18875.2.patch, HIVE-18875.3.patch
>

-- This
[jira] [Commented] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476798#comment-16476798 ]

Eugene Koifman commented on HIVE-18748:
---------------------------------------

Patch 4 addressing comments.
[jira] [Updated] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18748: -- Attachment: HIVE-18748.04.patch > Rename table impacts the ACID behaviour as table names are not updated in > meta-tables. > -- > > Key: HIVE-18748 > URL: https://issues.apache.org/jira/browse/HIVE-18748 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Eugene Koifman >Priority: Critical > Labels: ACID, DDL > Attachments: HIVE-18748.02.patch, HIVE-18748.03.patch, > HIVE-18748.04.patch > > > The ACID implementation uses metatables such as TXN_COMPONENTS, > COMPLETED_TXN_COMPONENTS, COMPACTION_QUEUE, COMPLETED_COMPACTION_QUEUE, etc. to > manage ACID operations. > The per-table write ID implementation (HIVE-18192) introduces a couple of > metatables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage the write ids > allocated per table. > Now, when we rename any table, it is necessary to update the corresponding > table names in these metatables as well. Otherwise, ACID table operations > won't work properly. > Since this change is significant and has other side-effects, we propose to > disable renaming of ACID tables until a fix is figured out. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
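The issue description above lists the metatables whose table-name columns a rename would have to touch. A minimal sketch of that bookkeeping, in Java since that is Hive's implementation language; the table-name column names below are illustrative assumptions, not the actual metastore schema, and the SQL is built with `?` placeholders so the names could be bound through a `PreparedStatement` rather than concatenated into the statement:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RenameMetatables {
    // Metatable -> table-name column. The column names here are assumptions
    // for illustration only, not the real metastore schema.
    static final Map<String, String> TABLE_NAME_COLUMNS = new LinkedHashMap<>();
    static {
        TABLE_NAME_COLUMNS.put("TXN_COMPONENTS", "TC_TABLE");
        TABLE_NAME_COLUMNS.put("COMPLETED_TXN_COMPONENTS", "CTC_TABLE");
        TABLE_NAME_COLUMNS.put("COMPACTION_QUEUE", "CQ_TABLE");
        TABLE_NAME_COLUMNS.put("NEXT_WRITE_ID", "NWI_TABLE");
        TABLE_NAME_COLUMNS.put("TXN_TO_WRITE_ID", "T2W_TABLE");
    }

    // One parameterized UPDATE per metatable; the two '?' placeholders would
    // be bound to the new and old table names respectively.
    static String renameSql(String metatable) {
        String col = TABLE_NAME_COLUMNS.get(metatable);
        return "UPDATE " + metatable + " SET " + col + " = ? WHERE " + col + " = ?";
    }
}
```

A rename would iterate over every entry and execute the corresponding update in the same metastore transaction, which is also why the proposal falls back to disabling rename until all the updates are in place.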
[jira] [Commented] (HIVE-18875) Enable SMB Join by default in Tez
[ https://issues.apache.org/jira/browse/HIVE-18875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476789#comment-16476789 ] Hive QA commented on HIVE-18875: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 43s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 33s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 54s{color} | {color:blue} ql in master has 2320 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} The patch common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} ql: The patch generated 0 new + 32 unchanged - 1 fixed = 32 total (was 33) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10977/dev-support/hive-personality.sh | | git revision | master / 38c757c | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10977/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Enable SMB Join by default in Tez > - > > Key: HIVE-18875 > URL: https://issues.apache.org/jira/browse/HIVE-18875 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18875.1.patch, HIVE-18875.2.patch, > HIVE-18875.3.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-19492) Update Wiki with load data extension syntax
[ https://issues.apache.org/jira/browse/HIVE-19492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal resolved HIVE-19492. --- Resolution: Fixed Updated Wiki https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DML#LanguageManualDML-Loadingfilesintotables > Update Wiki with load data extension syntax > --- > > Key: HIVE-19492 > URL: https://issues.apache.org/jira/browse/HIVE-19492 > Project: Hive > Issue Type: Sub-task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19453) Extend Load Data statement to take Input file format and Serde as parameters
[ https://issues.apache.org/jira/browse/HIVE-19453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19453: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Extend Load Data statement to take Input file format and Serde as parameters > > > Key: HIVE-19453 > URL: https://issues.apache.org/jira/browse/HIVE-19453 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19453.01-branch-3.patch, HIVE-19453.1.patch > > > Extend the load data statement to take the input format of the source files > and the serde to interpret them as parameters. For example, > > load data local inpath > '../../data/files/load_data_job/partitions/load_data_2_partitions.txt' INTO > TABLE srcbucket_mapjoin > INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat' > SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] slim bouguerra updated HIVE-19490: -- Attachment: HIVE-19490.4.patch > Locking on Insert into for non native and managed tables. > - > > Key: HIVE-19490 > URL: https://issues.apache.org/jira/browse/HIVE-19490 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Labels: druid, locking > Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, > HIVE-19490.4.patch, HIVE-19490.patch > > > Current state of the art: > Managed non-native tables, such as Druid tables, need to take a lock on insert > into or insert overwrite. This lock is set to Exclusive by > default for any non-native table. > This implies that inserts into a Druid table will block any read query as well > during the execution of the insert into. IMO this lock (on insert into) is > not needed, since the insert statement is appending data and the state of > loading it is managed partially by the Hive storage handler hook and partially > by Druid. > What I am proposing is to relax the lock level to shared for all non-native > tables on insert into operations and keep it as Exclusive Write for insert > overwrite for now. > > Any feedback is welcome. > cc [~ekoifman] / [~ashutoshc] / [~jdere] / [~hagleitn] > Also, I am not sure what the best way to unit test this is; currently I am using the > debugger to check that the locks are what I expect. Please let me know if there is > a better way to do this. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476779#comment-16476779 ] slim bouguerra commented on HIVE-19490: --- Thanks for the review and feedback. I have uploaded a new patch to fix the checkstyle issues. > Locking on Insert into for non native and managed tables. > - > > Key: HIVE-19490 > URL: https://issues.apache.org/jira/browse/HIVE-19490 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Labels: druid, locking > Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, > HIVE-19490.4.patch, HIVE-19490.patch > > > Current state of the art: > Managed non-native tables, such as Druid tables, need to take a lock on insert > into or insert overwrite. This lock is set to Exclusive by > default for any non-native table. > This implies that inserts into a Druid table will block any read query as well > during the execution of the insert into. IMO this lock (on insert into) is > not needed, since the insert statement is appending data and the state of > loading it is managed partially by the Hive storage handler hook and partially > by Druid. > What I am proposing is to relax the lock level to shared for all non-native > tables on insert into operations and keep it as Exclusive Write for insert > overwrite for now. > > Any feedback is welcome. > cc [~ekoifman] / [~ashutoshc] / [~jdere] / [~hagleitn] > Also, I am not sure what the best way to unit test this is; currently I am using the > debugger to check that the locks are what I expect. Please let me know if there is > a better way to do this. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
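The proposal in the ticket above boils down to a small decision rule for non-native tables. A hedged sketch of that rule; the enum and method names are illustrative, not Hive's actual lock-manager API:

```java
public class InsertLockRule {
    // Illustrative lock levels, not Hive's actual lock enum.
    enum LockLevel { SHARED, EXCLUSIVE }

    // Proposed rule from this ticket, for non-native (storage-handler) tables:
    // INSERT INTO appends data and only needs a shared lock, so concurrent
    // reads are not blocked; INSERT OVERWRITE replaces data and keeps the
    // exclusive lock for now.
    static LockLevel lockForNonNativeInsert(boolean overwrite) {
        return overwrite ? LockLevel.EXCLUSIVE : LockLevel.SHARED;
    }
}
```

Native ACID tables follow separate rules that are not modeled here; the point of the sketch is only that relaxing the insert-into case removes the read/write conflict described for Druid tables.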
[jira] [Assigned] (HIVE-19115) Merge: Semijoin hints are dropped by the merge
[ https://issues.apache.org/jira/browse/HIVE-19115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-19115: - Assignee: Deepak Jaiswal > Merge: Semijoin hints are dropped by the merge > -- > > Key: HIVE-19115 > URL: https://issues.apache.org/jira/browse/HIVE-19115 > Project: Hive > Issue Type: Bug > Components: Query Planning, Transactions >Reporter: Gopal V >Assignee: Deepak Jaiswal >Priority: Major > > {code} > create table target stored as orc as select ss_ticket_number, ss_item_sk, > current_timestamp as `ts` from tpcds_bin_partitioned_orc_1000.store_sales; > create table source stored as orc as select sr_ticket_number, sr_item_sk, > d_date from tpcds_bin_partitioned_orc_1000.store_returns join > tpcds_bin_partitioned_orc_1000.date_dim where d_date_sk = sr_returned_date_sk; > merge /* +semi(T, sr_ticket_number, S, 1) */ into target T using (select > * from source where year(d_date) = 1998) S ON T.ss_ticket_number = > S.sr_ticket_number and sr_item_sk = ss_item_sk > when matched THEN UPDATE SET ts = current_timestamp > when not matched and sr_item_sk is not null and sr_ticket_number is not null > THEN INSERT VALUES(S.sr_ticket_number, S.sr_item_sk, current_timestamp); > {code} > The semijoin hints are ignored and the code says > {code} > todo: do we care to preserve comments in original SQL? > {code} > https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/parse/UpdateDeleteSemanticAnalyzer.java#L624 > in this case we do. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476744#comment-16476744 ] Hive QA commented on HIVE-18748: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923384/HIVE-18748.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 14405 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby8] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[windowing_navfn] (batchId=70) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[explainuser_4] (batchId=164) org.apache.hive.jdbc.TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery (batchId=243) org.apache.hive.service.server.TestInformationSchemaWithPrivilege.test (batchId=238) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10976/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10976/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10976/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12923384 - PreCommit-HIVE-Build > Rename table impacts the ACID behaviour as table names are not updated in > meta-tables. 
> -- > > Key: HIVE-18748 > URL: https://issues.apache.org/jira/browse/HIVE-18748 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, Transactions >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Eugene Koifman >Priority: Critical > Labels: ACID, DDL > Attachments: HIVE-18748.02.patch, HIVE-18748.03.patch > > > The ACID implementation uses metatables such as TXN_COMPONENTS, > COMPLETED_TXN_COMPONENTS, COMPACTION_QUEUE, COMPLETED_COMPACTION_QUEUE, etc. to > manage ACID operations. > The per-table write ID implementation (HIVE-18192) introduces a couple of > metatables, NEXT_WRITE_ID and TXN_TO_WRITE_ID, to manage the write ids > allocated per table. > Now, when we rename any table, it is necessary to update the corresponding > table names in these metatables as well. Otherwise, ACID table operations > won't work properly. > Since this change is significant and has other side-effects, we propose to > disable renaming of ACID tables until a fix is figured out. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19560) Retry test runner and retry rule for flaky tests
[ https://issues.apache.org/jira/browse/HIVE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476729#comment-16476729 ] Vineet Garg commented on HIVE-19560: Sounds good. > Retry test runner and retry rule for flaky tests > > > Key: HIVE-19560 > URL: https://issues.apache.org/jira/browse/HIVE-19560 > Project: Hive > Issue Type: Improvement > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19560.1.patch > > > Implement a custom test runner that retries failed tests as a workaround for > flakiness. Also a test rule for retrying failed tests (for cases where a custom > test runner is not possible, e.g. parameterized tests, which already use a > custom TestRunner). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
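The retry idea behind the ticket above can be sketched without any JUnit machinery. This is an illustrative helper under the assumption that a test body can be wrapped in a `Callable`; it is not the actual HIVE-19560 implementation:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Run a flaky body up to maxAttempts times; return on the first success,
    // or rethrow the last failure once the attempts are exhausted.
    static <T> T withRetries(Callable<T> body, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return body.call();
            } catch (Exception e) {
                last = e; // remember the failure and try again
            }
        }
        throw last;
    }
}
```

A JUnit `TestRule` or a custom runner applies the same loop around each test method; the rule form exists precisely because parameterized tests already occupy the runner slot.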
[jira] [Commented] (HIVE-19516) TestNegative merge_negative_5 and mm_concatenate are causing timeouts
[ https://issues.apache.org/jira/browse/HIVE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476728#comment-16476728 ] Sergey Shelukhin commented on HIVE-19516: - It starts txn on metastore and waits for it to finish. I guess the tests are not set up with compactor and so it never finishes... [~ekoifman] I wonder if we could detect that and fail, actually... > TestNegative merge_negative_5 and mm_concatenate are causing timeouts > - > > Key: HIVE-19516 > URL: https://issues.apache.org/jira/browse/HIVE-19516 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Vineet Garg >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19516.patch > > > I haven't tried to reproduce this in isolation but it is reproducible if you > run in batch on local system > {noformat} > mvn -B test -Dtest.groups= -Dtest=TestNegativeCliDriver >
[jira] [Commented] (HIVE-18748) Rename table impacts the ACID behaviour as table names are not updated in meta-tables.
[ https://issues.apache.org/jira/browse/HIVE-18748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476721#comment-16476721 ] Hive QA commented on HIVE-18748: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 56s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 50s{color} | {color:blue} ql in master has 2320 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 54s{color} | {color:blue} standalone-metastore in master has 215 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 7s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 36s{color} | {color:red} ql: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s{color} | {color:red} standalone-metastore: The patch generated 10 new + 620 unchanged - 0 fixed = 630 total (was 620) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 3m 3s{color} | {color:red} standalone-metastore generated 1 new + 215 unchanged - 0 fixed = 216 total (was 215) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 33m 56s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:standalone-metastore | | | org.apache.hadoop.hive.metastore.txn.TxnHandler.onRename(String, String, String, String, String, String, String, String) passes a nonconstant String to an execute or addBatch method on an SQL statement At TxnHandler.java:String, String, String) passes a nonconstant String to an execute or addBatch method on an SQL statement At TxnHandler.java:[line 2913] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10976/dev-support/hive-personality.sh | | git revision | master / bcf4072 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10976/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10976/yetus/diff-checkstyle-standalone-metastore.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-10976/yetus/new-findbugs-standalone-metastore.html | | modules | C: ql standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10976/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Rename table impacts the ACID behaviour as table names are not updated in > meta-tables. >
[jira] [Commented] (HIVE-19516) TestNegative merge_negative_5 and mm_concatenate are causing timeouts
[ https://issues.apache.org/jira/browse/HIVE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476719#comment-16476719 ] Vineet Garg commented on HIVE-19516: +1 for the patch. Question about the former change: does that mean running {{alter table concatenate}} will run forever now instead of failing? What is the expected behavior of running {{alter table concatenate}} on a transactional table? > TestNegative merge_negative_5 and mm_concatenate are causing timeouts > - > > Key: HIVE-19516 > URL: https://issues.apache.org/jira/browse/HIVE-19516 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Vineet Garg >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19516.patch > > > I haven't tried to reproduce this in isolation but it is reproducible if you > run in batch on local system > {noformat} > mvn -B test -Dtest.groups= -Dtest=TestNegativeCliDriver >
[jira] [Assigned] (HIVE-19568) Active/Passive HS2 HA: Disallow direct connection to passive HS2 instance
[ https://issues.apache.org/jira/browse/HIVE-19568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-19568: > Active/Passive HS2 HA: Disallow direct connection to passive HS2 instance > - > > Key: HIVE-19568 > URL: https://issues.apache.org/jira/browse/HIVE-19568 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > > The recommended usage for clients when connecting to HS2 with Active/Passive > HA configuration is via ZK service discovery URL. But some applications do > not support ZK service discovery in which case they use direct URL to connect > to HS2 instance. If direct connection is to passive HS2 instance, the > connection should be dropped with proper error message. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
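The behavior described above is essentially a gate at session-open time. A minimal sketch; the class, method, and error message here are illustrative assumptions, not HiveServer2's actual API:

```java
public class HaConnectionGate {
    // Reject a direct (non-service-discovery) connection to a passive
    // instance with a clear error that points the client at the ZooKeeper
    // service discovery URL instead of silently accepting the session.
    static String openSession(boolean instanceIsActive) {
        if (!instanceIsActive) {
            throw new IllegalStateException(
                "HiveServer2 instance is passive; connect via the ZooKeeper service discovery URL");
        }
        return "SESSION_OPENED";
    }
}
```

Clients that do support ZK discovery never hit this path, because discovery only ever resolves to the active instance.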
[jira] [Commented] (HIVE-19495) Arrow SerDe itest failure
[ https://issues.apache.org/jira/browse/HIVE-19495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476712#comment-16476712 ] Matt McCline commented on HIVE-19495: - Patch #3 committed to master. > Arrow SerDe itest failure > - > > Key: HIVE-19495 > URL: https://issues.apache.org/jira/browse/HIVE-19495 > Project: Hive > Issue Type: Sub-task > Components: Serializers/Deserializers >Reporter: Eric Wohlstadter >Assignee: Teddy Choi >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19495.1.patch, HIVE-19495.2.patch, > HIVE-19495.3.patch > > > "You tried to write a Bit type when you are using a ValueWriter of type > NullableMapWriter." -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19495) Arrow SerDe itest failure
[ https://issues.apache.org/jira/browse/HIVE-19495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19495: Resolution: Fixed Status: Resolved (was: Patch Available) > Arrow SerDe itest failure > - > > Key: HIVE-19495 > URL: https://issues.apache.org/jira/browse/HIVE-19495 > Project: Hive > Issue Type: Sub-task > Components: Serializers/Deserializers >Reporter: Eric Wohlstadter >Assignee: Teddy Choi >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19495.1.patch, HIVE-19495.2.patch, > HIVE-19495.3.patch > > > "You tried to write a Bit type when you are using a ValueWriter of type > NullableMapWriter." -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-4367) enhance TRUNCATE syntax to drop data of external table
[ https://issues.apache.org/jira/browse/HIVE-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476699#comment-16476699 ] Hive QA commented on HIVE-4367: --- Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923374/HIVE-4367.3.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10974/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10974/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10974/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12923374/HIVE-4367.3.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12923374 - PreCommit-HIVE-Build > enhance TRUNCATE syntax to drop data of external table > > > Key: HIVE-4367 > URL: https://issues.apache.org/jira/browse/HIVE-4367 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Affects Versions: 0.11.0 >Reporter: caofangkun >Assignee: caofangkun >Priority: Minor > Attachments: HIVE-4367-1.patch, HIVE-4367.2.patch.txt, > HIVE-4367.3.patch > > > In my use case, > sometimes I have to remove data of external tables to free up storage space > of the cluster. > So it's necessary to enhance the syntax like > "TRUNCATE TABLE srcpart_truncate PARTITION (dt='201130412') FORCE;" > to remove data from an EXTERNAL table. 
> And I add a configuration property to control whether removed data goes to the Trash > > hive.truncate.skiptrash > false > > if true, drop the data immediately; if false (the default), move it to the Trash > > > For example : > hive (default)> TRUNCATE TABLE external1 partition (ds='11'); > FAILED: Error in semantic analysis: Cannot truncate non-managed table > external1 > hive (default)> TRUNCATE TABLE external1 partition (ds='11') FORCE; > [2013-04-16 17:15:52]: Compile Start > [2013-04-16 17:15:52]: Compile End > [2013-04-16 17:15:52]: OK > [2013-04-16 17:15:52]: Time taken: 0.413 seconds > hive (default)> set hive.truncate.skiptrash; > hive.truncate.skiptrash=false > hive (default)> set hive.truncate.skiptrash=true; > hive (default)> TRUNCATE TABLE external1 partition (ds='12') FORCE; > [2013-04-16 17:16:21]: Compile Start > [2013-04-16 17:16:21]: Compile End > [2013-04-16 17:16:21]: OK > [2013-04-16 17:16:21]: Time taken: 0.143 seconds > hive (default)> dfs -ls /user/test/.Trash/Current/; > Found 1 items > drwxr-xr-x -test supergroup 0 2013-04-16 17:06 /user/test/.Trash/Current/ds=11 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19465) Upgrade ORC to 1.5.0
[ https://issues.apache.org/jira/browse/HIVE-19465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476697#comment-16476697 ] Hive QA commented on HIVE-19465: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923375/HIVE-19465.02.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10973/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10973/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10973/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12923375/HIVE-19465.02.patch was found in seen patch url's cache and a test was probably run already on it. Aborting... {noformat} This message is automatically generated. ATTACHMENT ID: 12923375 - PreCommit-HIVE-Build > Upgrade ORC to 1.5.0 > > > Key: HIVE-19465 > URL: https://issues.apache.org/jira/browse/HIVE-19465 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Attachments: HIVE-19465.01.patch, HIVE-19465.02.patch, > HIVE-19465.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476698#comment-16476698 ] Prasanth Jayachandran commented on HIVE-19567: -- Hopefully. On the repro cluster I tried this patch several times and it never failed (without it, it failed with a timeout and an assertion error). > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another source of flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19465) Upgrade ORC to 1.5.0
[ https://issues.apache.org/jira/browse/HIVE-19465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476693#comment-16476693 ] Hive QA commented on HIVE-19465: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923375/HIVE-19465.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 111 failed/errored test(s), 14404 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[materialized_view_create_rewrite] (batchId=253) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_part] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[annotate_stats_table] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnStatsUpdateForStatsOptimizer_2] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[deleteAnalyze] (batchId=32) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_date] (batchId=21) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_full] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[extrapolate_part_stats_partial] (batchId=50) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[groupby8] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_file_dump] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge10] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge11] (batchId=41) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge12] 
(batchId=64) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=80) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[typechangetest] (batchId=11) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_case_when_1] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_case_when_2] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_char_2] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_coalesce_2] (batchId=74) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_data_types] (batchId=79) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_interval_1] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vectorized_timestamp_funcs] (batchId=31) org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver[druidmini_mv] (batchId=253) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[dynamic_semijoin_user_level] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_nullscan] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters1] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_llap_counters] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge10] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge1] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge2] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge3] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge4] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_basic] (batchId=149) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_ppd_schema_evol_3a] 
(batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[alter_merge_orc] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[alter_merge_stats_orc] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[column_table_stats_orc] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[deleteAnalyze] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynamic_semijoin_reduction] (batchId=166) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_opt_vectorization] (batchId=166)
[jira] [Commented] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476691#comment-16476691 ] Sergey Shelukhin commented on HIVE-19567: - Hmm... will this also cover when it times out? +1 > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476685#comment-16476685 ] Prasanth Jayachandran commented on HIVE-19567: -- cc/ [~jcamachorodriguez] with this the retry annotation should no longer be required (although doesn't hurt). > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476684#comment-16476684 ] Prasanth Jayachandran commented on HIVE-19567: -- [~sershe] can you please take a look? > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19465) Upgrade ORC to 1.5.0
[ https://issues.apache.org/jira/browse/HIVE-19465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476683#comment-16476683 ] Hive QA commented on HIVE-19465: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 51s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 39s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 39s{color} | {color:blue} llap-server in master has 86 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 52s{color} | {color:blue} ql in master has 2320 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 42s{color} | {color:green} root: The patch generated 0 new + 478 unchanged - 5 fixed = 478 total (was 483) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} llap-server: The patch generated 0 new + 108 unchanged - 2 fixed = 108 total (was 110) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} ql: The patch generated 0 new + 370 unchanged - 3 fixed = 370 total (was 373) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10972/dev-support/hive-personality.sh | | git revision | master / bcf4072 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: . llap-server ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10972/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Upgrade ORC to 1.5.0 > > > Key: HIVE-19465 > URL: https://issues.apache.org/jira/browse/HIVE-19465 > Project: Hive > Issue Type: Improvement > Components: File Formats >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Blocker > Attachments: HIVE-19465.01.patch, HIVE-19465.02.patch, > HIVE-19465.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476680#comment-16476680 ] Prasanth Jayachandran commented on HIVE-19567: -- On a relatively slow CentOS machine I was able to reproduce the flakiness, which caused either assertion errors or test timeouts. The reason is that AbstractJdbcTriggersTest waits until the last expected message appears in STDERR, here: https://github.com/apache/hive/blob/master/itests/hive-unit/src/test/java/org/apache/hive/jdbc/AbstractJdbcTriggersTest.java#L164. In the tests, the last expected line is the JSON line from the WM event summary. When I printed STDERR, the JSON output gets printed first, followed by the text summary, but the assertions check the text output first and the JSON output second. When the JSON output appears in stderr, the while loop exits and the assertion fails because no text output has been seen yet. I made the ordering consistent: the post hook prints the JSON and then the text summary, and all tests now assert in the same order (JSON first, then text). > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
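The ordering race in that comment can be sketched as follows. This is a hypothetical Python illustration of the failure mode, not the actual Hive test code (the function and the summary strings are invented for the example): the test tails stderr only until the *last* expected entry appears, so if the hook emits JSON before text but the assertions expect text before JSON, the loop exits before the text line is ever read.

```python
# Hypothetical sketch (not Hive's AbstractJdbcTriggersTest) of an
# order-sensitive assertion over a stderr stream.

def assert_in_order(stderr_lines, expected):
    """Read lines until the final expected entry is seen, then verify
    that every expected entry appeared, in order."""
    seen = []
    for line in stderr_lines:
        seen.append(line)
        if expected[-1] in line:
            break  # stop reading as soon as the last expected line shows up
    pos = 0
    for exp in expected:
        pos = next((i for i in range(pos, len(seen)) if exp in seen[i]), -1)
        if pos == -1:
            return False  # an expected entry never appeared (in order)
        pos += 1
    return True

# The post hook prints the JSON summary first, then the text summary:
stderr = ["wm json summary: {...}", "wm text summary: total=3"]

# Flaky ordering: expecting text first, JSON last. The read loop breaks on
# the JSON line before the text summary is read, so the assertion fails.
flaky = assert_in_order(stderr, ["wm text summary", "wm json summary"])

# Fixed ordering: asserting in the same order the hook prints (JSON, then
# text) is stable.
fixed = assert_in_order(stderr, ["wm json summary", "wm text summary"])
```

The fix in the patch makes the assertion order match the print order, which is why the retry annotation becomes unnecessary.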
[jira] [Updated] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19567: - Attachment: HIVE-19567.1.patch > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19567: - Status: Patch Available (was: Open) > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19567.1.patch > > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19567) Fix flakiness in TestTriggers
[ https://issues.apache.org/jira/browse/HIVE-19567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-19567: > Fix flakiness in TestTriggers > - > > Key: HIVE-19567 > URL: https://issues.apache.org/jira/browse/HIVE-19567 > Project: Hive > Issue Type: Bug > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > > Identified another flakiness in TestTriggersMoveWorkloadManager which can > cause intermittent test failures. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18453) ACID: Add "CREATE TRANSACTIONAL TABLE" syntax to unify ACID ORC & Parquet support
[ https://issues.apache.org/jira/browse/HIVE-18453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18453: -- Target Version/s: 3.1.0 > ACID: Add "CREATE TRANSACTIONAL TABLE" syntax to unify ACID ORC & Parquet > support > - > > Key: HIVE-18453 > URL: https://issues.apache.org/jira/browse/HIVE-18453 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Igor Kryvenko >Priority: Major > > The ACID table markers are currently done with TBLPROPERTIES which is > inherently fragile. > The "create transactional table" offers a way to standardize the syntax and > allows for future compatibility changes to support Parquet ACIDv2 tables > along with ORC tables. > The ACIDv2 design is format independent, with the ability to add new > vectorized input formats with no changes to the design. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18453) ACID: Add "CREATE TRANSACTIONAL TABLE" syntax to unify ACID ORC & Parquet support
[ https://issues.apache.org/jira/browse/HIVE-18453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476665#comment-16476665 ] Eugene Koifman commented on HIVE-18453: --- [~ikryvenko], I think if the ticket is not assigned to anyone, you should feel free to work on it > ACID: Add "CREATE TRANSACTIONAL TABLE" syntax to unify ACID ORC & Parquet > support > - > > Key: HIVE-18453 > URL: https://issues.apache.org/jira/browse/HIVE-18453 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Igor Kryvenko >Priority: Major > > The ACID table markers are currently done with TBLPROPERTIES which is > inherently fragile. > The "create transactional table" offers a way to standardize the syntax and > allows for future compatibility changes to support Parquet ACIDv2 tables > along with ORC tables. > The ACIDv2 design is format independent, with the ability to add new > vectorized input formats with no changes to the design. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18453) ACID: Add "CREATE TRANSACTIONAL TABLE" syntax to unify ACID ORC & Parquet support
[ https://issues.apache.org/jira/browse/HIVE-18453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-18453: - Assignee: Igor Kryvenko > ACID: Add "CREATE TRANSACTIONAL TABLE" syntax to unify ACID ORC & Parquet > support > - > > Key: HIVE-18453 > URL: https://issues.apache.org/jira/browse/HIVE-18453 > Project: Hive > Issue Type: Improvement > Components: Transactions >Reporter: Gopal V >Assignee: Igor Kryvenko >Priority: Major > > The ACID table markers are currently done with TBLPROPERTIES which is > inherently fragile. > The "create transactional table" offers a way to standardize the syntax and > allows for future compatibility changes to support Parquet ACIDv2 tables > along with ORC tables. > The ACIDv2 design is format independent, with the ability to add new > vectorized input formats with no changes to the design. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18117) Create TestCliDriver for HDFS EC
[ https://issues.apache.org/jira/browse/HIVE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-18117: -- Attachment: HIVE-18117.6.patch > Create TestCliDriver for HDFS EC > > > Key: HIVE-18117 > URL: https://issues.apache.org/jira/browse/HIVE-18117 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18117.1.patch, HIVE-18117.2.patch, > HIVE-18117.3.patch, HIVE-18117.4.patch, HIVE-18117.5.patch, HIVE-18117.6.patch > > > Should be able to do something similar to what we do for HDFS encryption. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476656#comment-16476656 ] Eugene Koifman edited comment on HIVE-19490 at 5/16/18 12:42 AM: - there are a number of new checkstyle warnings, otherwise +1 was (Author: ekoifman): +1 > Locking on Insert into for non native and managed tables. > - > > Key: HIVE-19490 > URL: https://issues.apache.org/jira/browse/HIVE-19490 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Labels: druid, locking > Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, HIVE-19490.patch > > > Current state of the art: > Managed non native table like Druid Tables, will need to get a Lock on Insert > into or insert Over write. The nature of this lock is set to Exclusive by > default for any non native table. > This implies that Inserts into Druid table will Lock any read query as well > during the execution of the insert into. IMO this lock (on insert into) is > not needed since the insert statement is appending data and the state of > loading it is managed partially by Hive Storage handler hook and part of it > by Druid. > What i am proposing is to relax the lock level to shared for all non native > tables on insert into operations and keep it as Exclusive Write for insert > Overwrite for now. > > Any feedback is welcome. > cc [~ekoifman] / [~ashutoshc] / [~jdere] / [~hagleitn] > Also am not sure what is the best way to unit test this currently am using > debugger to check if locks are what i except, please let me know if there is > a better way to do this. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476656#comment-16476656 ] Eugene Koifman commented on HIVE-19490: --- +1 > Locking on Insert into for non native and managed tables. > - > > Key: HIVE-19490 > URL: https://issues.apache.org/jira/browse/HIVE-19490 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Labels: druid, locking > Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, HIVE-19490.patch > > > Current state of the art: > Managed non native table like Druid Tables, will need to get a Lock on Insert > into or insert Over write. The nature of this lock is set to Exclusive by > default for any non native table. > This implies that Inserts into Druid table will Lock any read query as well > during the execution of the insert into. IMO this lock (on insert into) is > not needed since the insert statement is appending data and the state of > loading it is managed partially by Hive Storage handler hook and part of it > by Druid. > What i am proposing is to relax the lock level to shared for all non native > tables on insert into operations and keep it as Exclusive Write for insert > Overwrite for now. > > Any feedback is welcome. > cc [~ekoifman] / [~ashutoshc] / [~jdere] / [~hagleitn] > Also am not sure what is the best way to unit test this currently am using > debugger to check if locks are what i except, please let me know if there is > a better way to do this. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
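The lock relaxation proposed in HIVE-19490 reduces to a simple decision rule: an INSERT INTO on a non-native (storage-handler backed) table only appends data, so a shared lock suffices and concurrent reads are not blocked, while INSERT OVERWRITE replaces data and keeps the exclusive-write lock. A minimal sketch, assuming invented names (this is not Hive's actual lock-manager API):

```python
# Hypothetical illustration of the proposed lock-selection rule for
# non-native tables (e.g. Druid-backed tables). Names are invented.

def lock_for_write(is_non_native: bool, is_overwrite: bool) -> str:
    """Pick the lock level for a write operation.

    INSERT INTO a non-native table is an append, so a SHARED lock lets
    concurrent read queries proceed; INSERT OVERWRITE replaces the data
    and retains the EXCLUSIVE lock, as does any write where the table is
    not non-native (unchanged by this proposal).
    """
    if is_non_native and not is_overwrite:
        return "SHARED"
    return "EXCLUSIVE"
```

Under this rule, an insert into a Druid table no longer blocks readers for the duration of the load, which is the behavior the issue description argues for.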
[jira] [Commented] (HIVE-19441) Add support for float aggregator and use LLAP test Driver
[ https://issues.apache.org/jira/browse/HIVE-19441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476648#comment-16476648 ] Ashutosh Chauhan commented on HIVE-19441: - +1 > Add support for float aggregator and use LLAP test Driver > - > > Key: HIVE-19441 > URL: https://issues.apache.org/jira/browse/HIVE-19441 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19441.2.patch, HIVE-19441.patch > > > Adding support to the float kind aggregator. > Use LLAP as test Driver to reduce execution time of tests from about 2 hours > to 15 min: > Although this patches unveiling an issue with timezone, maybe it is fixed by > [~jcamachorodriguez] upcoming set of patches. > > Before > {code} > [INFO] Executed tasks > [INFO] > [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ > hive-it-qfile --- > [INFO] Compiling 21 source files to > /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes > [INFO] > [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile > --- > [INFO] > [INFO] --- > [INFO] T E S T S > [INFO] --- > [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver > [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 6,654.117 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver > [INFO] > [INFO] Results: > [INFO] > [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0 > [INFO] > [INFO] > > [INFO] BUILD SUCCESS > [INFO] > > [INFO] Total time: 01:51 h > [INFO] Finished at: 2018-05-04T12:43:19-07:00 > [INFO] > > {code} > After > {code} > INFO] Executed tasks > [INFO] > [INFO] --- maven-compiler-plugin:3.6.1:testCompile (default-testCompile) @ > hive-it-qfile --- > [INFO] Compiling 22 source files to > /Users/sbouguerra/Hdev/hive/itests/qtest/target/test-classes > [INFO] > [INFO] --- maven-surefire-plugin:2.21.0:test (default-test) @ hive-it-qfile > --- > 
[INFO] > [INFO] --- > [INFO] T E S T S > [INFO] --- > [INFO] Running org.apache.hadoop.hive.cli.TestMiniDruidCliDriver > [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: > 907.167 s - in org.apache.hadoop.hive.cli.TestMiniDruidCliDriver > [INFO] > [INFO] Results: > [INFO] > [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0 > [INFO] > [INFO] > > [INFO] BUILD SUCCESS > [INFO] > > [INFO] Total time: 15:31 min > [INFO] Finished at: 2018-05-04T13:15:11-07:00 > [INFO] > > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476637#comment-16476637 ] Hive QA commented on HIVE-19490: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923371/HIVE-19490.3.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 14406 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10971/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10971/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10971/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12923371 - PreCommit-HIVE-Build > Locking on Insert into for non native and managed tables. > - > > Key: HIVE-19490 > URL: https://issues.apache.org/jira/browse/HIVE-19490 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Labels: druid, locking > Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, HIVE-19490.patch > > > Current state of the art: > Managed non native table like Druid Tables, will need to get a Lock on Insert > into or insert Over write. The nature of this lock is set to Exclusive by > default for any non native table. > This implies that Inserts into Druid table will Lock any read query as well > during the execution of the insert into. IMO this lock (on insert into) is > not needed since the insert statement is appending data and the state of > loading it is managed partially by Hive Storage handler hook and part of it > by Druid. 
> What i am proposing is to relax the lock level to shared for all non native > tables on insert into operations and keep it as Exclusive Write for insert > Overwrite for now. > > Any feedback is welcome. > cc [~ekoifman] / [~ashutoshc] / [~jdere] / [~hagleitn] > Also am not sure what is the best way to unit test this currently am using > debugger to check if locks are what i except, please let me know if there is > a better way to do this. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19490) Locking on Insert into for non native and managed tables.
[ https://issues.apache.org/jira/browse/HIVE-19490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476615#comment-16476615 ] Hive QA commented on HIVE-19490: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 40s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 24s{color} | {color:blue} druid-handler in master has 12 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 55s{color} | {color:blue} ql in master has 2320 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s{color} | {color:red} druid-handler in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} druid-handler: The patch generated 0 new + 44 unchanged - 12 fixed = 44 total (was 56) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 24 new + 1067 unchanged - 8 fixed = 1091 total (was 1075) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 46s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10971/dev-support/hive-personality.sh | | git revision | master / bcf4072 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-10971/yetus/patch-mvninstall-druid-handler.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10971/yetus/diff-checkstyle-ql.txt | | modules | C: druid-handler ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10971/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Locking on Insert into for non native and managed tables. > - > > Key: HIVE-19490 > URL: https://issues.apache.org/jira/browse/HIVE-19490 > Project: Hive > Issue Type: Improvement > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Labels: druid, locking > Attachments: HIVE-19490.2.patch, HIVE-19490.3.patch, HIVE-19490.patch > > > Current state of the art: > Managed non native table like Druid Tables, will need to get a Lock on Insert > into or insert Over write. The nature of this lock is set to Exclusive by > default for
[jira] [Updated] (HIVE-19370) Issue: ADD Months function on timestamp datatype fields in hive
[ https://issues.apache.org/jira/browse/HIVE-19370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19370: Attachment: HIVE-19370.02.patch > Issue: ADD Months function on timestamp datatype fields in hive > --- > > Key: HIVE-19370 > URL: https://issues.apache.org/jira/browse/HIVE-19370 > Project: Hive > Issue Type: Bug >Reporter: Amit Chauhan >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19370.01.patch, HIVE-19370.02.patch > > > *Issue:* > while using ADD_Months function on a timestamp datatype column the output > omits the time part[HH:MM:SS] part from output. > which should not be the case. > *query:* EMAIL_FAILURE_DTMZ is of datatype timestamp in hive. > hive> select CUSTOMER_ID,EMAIL_FAILURE_DTMZ,ADD_MONTHS (EMAIL_FAILURE_DTMZ , > 1) from TABLE1 where CUSTOMER_ID=125674937; > OK > 125674937 2015-12-09 12:25:53 2016-01-09 > *hiver version :* > hive> !hive --version; > Hive 1.2.1000.2.5.6.0-40 > > can you please help if somehow I can get below as output: > > 125674937 2015-12-09 12:25:53 2016-01-09 12:25:53 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
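The behavior the HIVE-19370 reporter expects, shifting the month while preserving the HH:MM:SS part rather than truncating the timestamp to a date, can be sketched with stdlib datetime arithmetic. This is an illustration of the desired semantics, not Hive's actual ADD_MONTHS implementation:

```python
# Sketch of month addition that preserves the time-of-day component,
# clamping the day-of-month to the target month's length.
from datetime import datetime
import calendar

def add_months(ts: datetime, n: int) -> datetime:
    # Work in 0-based months so the carry into the year is a simple divmod.
    m = ts.month - 1 + n
    year, month = ts.year + m // 12, m % 12 + 1
    # Clamp e.g. Jan 31 + 1 month to Feb 28/29 instead of raising.
    day = min(ts.day, calendar.monthrange(year, month)[1])
    return ts.replace(year=year, month=month, day=day)

t = datetime(2015, 12, 9, 12, 25, 53)
print(add_months(t, 1))  # 2016-01-09 12:25:53 -- time part preserved
```

With these semantics the query in the report would return `2016-01-09 12:25:53` instead of the bare date `2016-01-09`.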
[jira] [Updated] (HIVE-19250) Schema column definitions inconsistencies in MySQL
[ https://issues.apache.org/jira/browse/HIVE-19250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19250: --- Fix Version/s: (was: 3.0.0) > Schema column definitions inconsistencies in MySQL > -- > > Key: HIVE-19250 > URL: https://issues.apache.org/jira/browse/HIVE-19250 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Minor > Fix For: 3.1.0 > > Attachments: HIVE-19250.patch, HIVE-19250.patch > > > There are some inconsistencies in column definitions in MySQL between a > schema that was upgraded to 2.1 (from an older release) vs installing the > 2.1.0 schema directly. > > `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 117d117 > < `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 135a136 > > `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 143d143 > < `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 156c156 > < `CTC_TXNID` bigint(20) DEFAULT NULL, > --- > > `CTC_TXNID` bigint(20) NOT NULL, > 158c158 > < `CTC_TABLE` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `CTC_TABLE` varchar(256) DEFAULT NULL, > 476c476 > < `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `TBL_NAME` varchar(256) DEFAULT NULL, > 664c664 > < KEY `PCS_STATS_IDX` > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`), > --- > > KEY `PCS_STATS_IDX` > > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`) USING BTREE, > 768c768 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 814c814 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 934c934 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 1066d1065 > < `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1067a1067 > > `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1080c1080 
> < `TC_TXNID` bigint(20) DEFAULT NULL, > --- > > `TC_TXNID` bigint(20) NOT NULL, > 1082c1082 > < `TC_TABLE` varchar(128) DEFAULT NULL, > --- > > `TC_TABLE` varchar(128) NOT NULL, > 1084c1084 > < `TC_OPERATION_TYPE` char(1) DEFAULT NULL, > --- > > `TC_OPERATION_TYPE` char(1) NOT NULL, -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19531) TransactionalValidationListener is getting catalog name from conf instead of table object.
[ https://issues.apache.org/jira/browse/HIVE-19531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476589#comment-16476589 ] Hive QA commented on HIVE-19531: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923362/HIVE-19531.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14404 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[date_1] (batchId=83) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[union_stats] (batchId=159) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10969/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10969/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10969/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12923362 - PreCommit-HIVE-Build > TransactionalValidationListener is getting catalog name from conf instead of > table object. > -- > > Key: HIVE-19531 > URL: https://issues.apache.org/jira/browse/HIVE-19531 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.1 > > Attachments: HIVE-19531.patch > > > TransactionalValidationListener.validateTableStructure get the catalog from > the conf file rather than taking it from the passed in table structure. 
This > causes createTable operations to fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
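The fix described above is an instance of a general pattern: read a property from the object being validated rather than from global configuration, using the configuration only as a fallback. A hypothetical sketch of that pattern (names such as catName and metastore.catalog.default are illustrative; this is not Hive's actual code):

```python
def catalog_for(table, conf):
    # Prefer the catalog name recorded on the table object itself; fall back
    # to the configured default only when the table does not carry one.
    return table.get("catName") or conf.get("metastore.catalog.default", "hive")

print(catalog_for({"catName": "spark"}, {"metastore.catalog.default": "hive"}))
# spark
```

Reading from the table object keeps validation correct even when the request targets a catalog other than the one configured as the process-wide default.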
[jira] [Commented] (HIVE-19516) TestNegative merge_negative_5 and mm_concatenate are causing timeouts
[ https://issues.apache.org/jira/browse/HIVE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476576#comment-16476576 ] Sergey Shelukhin commented on HIVE-19516: - Verified that both tests hang in DDLTask.compact; the test files are no longer valid - instead of failing, concatenate for transactional tables now triggers a compaction. Deleting the tests. [~vgarg] can you take a look? > TestNegative merge_negative_5 and mm_concatenate are causing timeouts > - > > Key: HIVE-19516 > URL: https://issues.apache.org/jira/browse/HIVE-19516 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Vineet Garg >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19516.patch > > > I haven't tried to reproduce this in isolation but it is reproducible if you > run in batch on local system > {noformat} > mvn -B test -Dtest.groups= -Dtest=TestNegativeCliDriver >
[jira] [Updated] (HIVE-19516) TestNegative merge_negative_5 and mm_concatenate are causing timeouts
[ https://issues.apache.org/jira/browse/HIVE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19516: Status: Patch Available (was: Open) > TestNegative merge_negative_5 and mm_concatenate are causing timeouts > - > > Key: HIVE-19516 > URL: https://issues.apache.org/jira/browse/HIVE-19516 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Vineet Garg >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19516.patch > > > I haven't tried to reproduce this in isolation but it is reproducible if you > run in batch on local system > {noformat} > mvn -B test -Dtest.groups= -Dtest=TestNegativeCliDriver >
[jira] [Updated] (HIVE-19516) TestNegative merge_negative_5 and mm_concatenate are causing timeouts
[ https://issues.apache.org/jira/browse/HIVE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19516: Attachment: HIVE-19516.patch > TestNegative merge_negative_5 and mm_concatenate are causing timeouts > - > > Key: HIVE-19516 > URL: https://issues.apache.org/jira/browse/HIVE-19516 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Vineet Garg >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19516.patch > > > I haven't tried to reproduce this in isolation but it is reproducible if you > run in batch on local system > {noformat} > mvn -B test -Dtest.groups= -Dtest=TestNegativeCliDriver >
[jira] [Assigned] (HIVE-19516) TestNegative merge_negative_5 and mm_concatenate are causing timeouts
[ https://issues.apache.org/jira/browse/HIVE-19516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-19516: --- Assignee: Sergey Shelukhin > TestNegative merge_negative_5 and mm_concatenate are causing timeouts > - > > Key: HIVE-19516 > URL: https://issues.apache.org/jira/browse/HIVE-19516 > Project: Hive > Issue Type: Bug > Components: Testing Infrastructure >Reporter: Vineet Garg >Assignee: Sergey Shelukhin >Priority: Major > > I haven't tried to reproduce this in isolation but it is reproducible if you > run in batch on local system > {noformat} > mvn -B test -Dtest.groups= -Dtest=TestNegativeCliDriver >
[jira] [Updated] (HIVE-19491) Branch-3 Start using storage-api 2.6.1 once available.
[ https://issues.apache.org/jira/browse/HIVE-19491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19491: --- Resolution: Fixed Status: Resolved (was: Patch Available) This is pushed to branch-3 > Branch-3 Start using storage-api 2.6.1 once available. > -- > > Key: HIVE-19491 > URL: https://issues.apache.org/jira/browse/HIVE-19491 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Vineet Garg >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19491.01-branch-3.patch > > > branch-3 needs storage-api 2.6.1 which is in the process of being released. > > cc. [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19491) Branch-3 Start using storage-api 2.6.1 once available.
[ https://issues.apache.org/jira/browse/HIVE-19491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476559#comment-16476559 ] Deepak Jaiswal commented on HIVE-19491: --- So this patch is already committed? > Branch-3 Start using storage-api 2.6.1 once available. > -- > > Key: HIVE-19491 > URL: https://issues.apache.org/jira/browse/HIVE-19491 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Vineet Garg >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19491.01-branch-3.patch > > > branch-3 needs storage-api 2.6.1 which is in the process of being released. > > cc. [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19531) TransactionalValidationListener is getting catalog name from conf instead of table object.
[ https://issues.apache.org/jira/browse/HIVE-19531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476558#comment-16476558 ] Hive QA commented on HIVE-19531: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 11s{color} | {color:blue} standalone-metastore in master has 215 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 18s{color} | {color:red} standalone-metastore: The patch generated 1 new + 19 unchanged - 0 fixed = 20 total (was 19) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 51s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10969/dev-support/hive-personality.sh | | git revision | master / d04db94 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10969/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10969/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > TransactionalValidationListener is getting catalog name from conf instead of > table object. > -- > > Key: HIVE-19531 > URL: https://issues.apache.org/jira/browse/HIVE-19531 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates >Priority: Blocker > Fix For: 3.0.1 > > Attachments: HIVE-19531.patch > > > TransactionalValidationListener.validateTableStructure get the catalog from > the conf file rather than taking it from the passed in table structure. This > causes createTable operations to fail. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19491) Branch-3 Start using storage-api 2.6.1 once available.
[ https://issues.apache.org/jira/browse/HIVE-19491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476551#comment-16476551 ] Vineet Garg commented on HIVE-19491: [~djaiswal] The changes for standalone-metastore are already pushed here: https://github.com/apache/hive/commit/04ad5d1790a5cdf91fce2b830d587e864c366bfc. This patch isn't relevant anymore. Regarding storage-api, what change are you suggesting? > Branch-3 Start using storage-api 2.6.1 once available. > -- > > Key: HIVE-19491 > URL: https://issues.apache.org/jira/browse/HIVE-19491 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Vineet Garg >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19491.01-branch-3.patch > > > branch-3 needs storage-api 2.6.1 which is in the process of being released. > > cc. [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-19561) Update README.md to update requirements for Hadoop and RELEASE_NOTES
[ https://issues.apache.org/jira/browse/HIVE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg resolved HIVE-19561. Resolution: Fixed Pushed to master > Update README.md to update requirements for Hadoop and RELEASE_NOTES > > > Key: HIVE-19561 > URL: https://issues.apache.org/jira/browse/HIVE-19561 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19561.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19491) Branch-3 Start using storage-api 2.6.1 once available.
[ https://issues.apache.org/jira/browse/HIVE-19491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476542#comment-16476542 ] Deepak Jaiswal commented on HIVE-19491: --- [~vgarg] can you please update pom.xml of standalone-metastore and storage-api as well? > Branch-3 Start using storage-api 2.6.1 once available. > -- > > Key: HIVE-19491 > URL: https://issues.apache.org/jira/browse/HIVE-19491 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Vineet Garg >Priority: Blocker > Fix For: 3.0.0 > > Attachments: HIVE-19491.01-branch-3.patch > > > branch-3 needs storage-api 2.6.1 which is in the process of being released. > > cc. [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19417) Modify metastore to have/access persistent tables for stats
[ https://issues.apache.org/jira/browse/HIVE-19417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476532#comment-16476532 ] Sergey Shelukhin commented on HIVE-19417: - Minor comments; overall looks good pending tests. I think [~ekoifman] should review the table structures; there seems to be some duplication with the ACID tables, but it looks OK to me. The inserts for the new stuff (Update, and setting the table update_id) will probably need to be in the same DB transaction as the commit of the Hive ACID transaction. > Modify metastore to have/access persistent tables for stats > --- > > Key: HIVE-19417 > URL: https://issues.apache.org/jira/browse/HIVE-19417 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Attachments: HIVE-19417.01.patch, HIVE-19417.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18079) Statistics: Allow HyperLogLog to be merged to the lowest-common-denominator bit-size
[ https://issues.apache.org/jira/browse/HIVE-18079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476531#comment-16476531 ] Hive QA commented on HIVE-18079: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923356/HIVE-18079.13.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10968/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10968/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10968/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-05-15 21:59:00.520 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10968/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-05-15 21:59:00.522 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 6e6b0cb..d04db94 master -> origin/master 3ea0356..efe9ab8 branch-3 -> origin/branch-3 + git reset --hard HEAD HEAD is now at 6e6b0cb HIVE-19496: Check untar folder (Aihua Xu, reviewed by Sahil Takiar) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 3 commits, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at d04db94 HIVE-19307: Support ArrowOutputStream in LlapOutputFormatService (Eric Wohlstadter, reviewed by Jason Dere) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-05-15 21:59:02.403 + rm -rf ../yetus_PreCommit-HIVE-Build-10968 + mkdir ../yetus_PreCommit-HIVE-Build-10968 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10968 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10968/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/test/queries/clientpositive/bucket_map_join_tez2.q: does not exist in index error: a/ql/src/test/queries/clientpositive/explainuser_4.q: does not exist in index error: a/ql/src/test/queries/clientpositive/tez_vector_dynpart_hashjoin_1.q: does not exist in index error: a/ql/src/test/results/clientpositive/autoColumnStats_2.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/autoColumnStats_9.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/bitvector.q.out: does not exist in index error: 
a/ql/src/test/results/clientpositive/compute_stats_date.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/confirm_initial_tbl_stats.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/cross_join_merge.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/describe_table.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/encrypted/encryption_move_tbl.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/hll.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/acid_no_buckets.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/autoColumnStats_2.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/auto_join1.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/auto_join21.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/auto_join29.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/auto_join30.q.out: does not exist in index error: a/ql/src/test/results/clientpositive/llap/auto_sortmerge_join_6.q.out: does not exist in index error:
[jira] [Updated] (HIVE-19421) Upgrade version of Jetty to 9.3.20.v20170531
[ https://issues.apache.org/jira/browse/HIVE-19421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Janaki Lahorani updated HIVE-19421: --- Attachment: HIVE-19421.4.patch > Upgrade version of Jetty to 9.3.20.v20170531 > > > Key: HIVE-19421 > URL: https://issues.apache.org/jira/browse/HIVE-19421 > Project: Hive > Issue Type: Improvement >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19421.1.patch, HIVE-19421.2.patch, > HIVE-19421.3.patch, HIVE-19421.3.patch, HIVE-19421.3.patch, HIVE-19421.4.patch > > > Move Jetty up to 9.3.20.v20170531 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18117) Create TestCliDriver for HDFS EC
[ https://issues.apache.org/jira/browse/HIVE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476530#comment-16476530 ] Hive QA commented on HIVE-18117: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923522/HIVE-18117.5.patch {color:green}SUCCESS:{color} +1 due to 5 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 3 failed/errored test(s), 14407 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union26] (batchId=68) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_bmj_schema_evolution] (batchId=154) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchDatabase[Remote] (batchId=211) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10967/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10967/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10967/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 3 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12923522 - PreCommit-HIVE-Build > Create TestCliDriver for HDFS EC > > > Key: HIVE-18117 > URL: https://issues.apache.org/jira/browse/HIVE-18117 > Project: Hive > Issue Type: Sub-task >Reporter: Sahil Takiar >Assignee: Andrew Sherman >Priority: Major > Attachments: HIVE-18117.1.patch, HIVE-18117.2.patch, > HIVE-18117.3.patch, HIVE-18117.4.patch, HIVE-18117.5.patch > > > Should be able to do something similar to what we do for HDFS encryption. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19555) Enable TestMiniLlapLocalCliDriver#tez_dynpart_hashjoin_1.q and TestMiniLlapLocalCliDriver#tez_vector_dynpart_hashjoin_1.q
[ https://issues.apache.org/jira/browse/HIVE-19555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19555: -- Attachment: HIVE-19555.1.patch > Enable TestMiniLlapLocalCliDriver#tez_dynpart_hashjoin_1.q and > TestMiniLlapLocalCliDriver#tez_vector_dynpart_hashjoin_1.q > - > > Key: HIVE-19555 > URL: https://issues.apache.org/jira/browse/HIVE-19555 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Priority: Critical > Attachments: HIVE-19555.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19555) Enable TestMiniLlapLocalCliDriver#tez_dynpart_hashjoin_1.q and TestMiniLlapLocalCliDriver#tez_vector_dynpart_hashjoin_1.q
[ https://issues.apache.org/jira/browse/HIVE-19555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19555: -- Assignee: Jason Dere Status: Patch Available (was: Open) > Enable TestMiniLlapLocalCliDriver#tez_dynpart_hashjoin_1.q and > TestMiniLlapLocalCliDriver#tez_vector_dynpart_hashjoin_1.q > - > > Key: HIVE-19555 > URL: https://issues.apache.org/jira/browse/HIVE-19555 > Project: Hive > Issue Type: Test > Components: Test >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jason Dere >Priority: Critical > Attachments: HIVE-19555.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19562) Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit
[ https://issues.apache.org/jira/browse/HIVE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476516#comment-16476516 ] Sahil Takiar commented on HIVE-19562: - [~pvary], [~vihangk1] can you take a look? > Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit > - > > Key: HIVE-19562 > URL: https://issues.apache.org/jira/browse/HIVE-19562 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19562.1.patch > > > Seeing sporadic failures during test setup. Specifically, when spark-submit > runs this error (or a similar error) gets thrown: > {code} > 2018-05-15T10:55:02,112 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: Exception in thread "main" > java.io.FileNotFoundException: File > file:/tmp/spark-56e217f7-b8a5-4c63-9a6b-d737a64f2820/__spark_libs__7371510645900072447.zip > does not exist > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > 2018-05-15T10:55:02,113 INFO > 
[RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:316) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:356) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:478) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:565) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.run(Client.scala:1146) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > 
client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] >
[jira] [Commented] (HIVE-19562) Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit
[ https://issues.apache.org/jira/browse/HIVE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476515#comment-16476515 ] Sahil Takiar commented on HIVE-19562: - {{spark.local.dir}} is also used in a bunch of other places, and it defaults to {{/tmp}} in Spark. So I updated its value in the other Spark CliDrivers too. Should decrease flakiness for the HoS tests. > Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit > - > > Key: HIVE-19562 > URL: https://issues.apache.org/jira/browse/HIVE-19562 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19562.1.patch > > > Seeing sporadic failures during test setup. Specifically, when spark-submit > runs this error (or a similar error) gets thrown: > {code} > 2018-05-15T10:55:02,112 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: Exception in thread "main" > java.io.FileNotFoundException: File > file:/tmp/spark-56e217f7-b8a5-4c63-9a6b-d737a64f2820/__spark_libs__7371510645900072447.zip > does not exist > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:316) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:356) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:478) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:565) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.run(Client.scala:1146) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518) > 
2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at >
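The patch in the comment above points {{spark.local.dir}} away from {{/tmp}}, where periodic tmp cleaners can delete the staged {{__spark_libs__*.zip}} between spark-submit writing it and YARN copying it. A minimal sketch of the idea, in Python with a hypothetical `spark_test_conf` helper (the real fix edits the Hive CliDriver test configs, not code like this):

```python
import tempfile
from pathlib import Path
from typing import Optional

def spark_test_conf(base_dir: Optional[str] = None) -> dict:
    """Build a conf dict whose Spark local dir lives under a test-scoped
    directory instead of the default /tmp. `base_dir` is hypothetical;
    a real qtest setup would point at the test's build directory."""
    local_dir = Path(base_dir or tempfile.mkdtemp(prefix="hos-test-"))
    local_dir.mkdir(parents=True, exist_ok=True)
    # spark.local.dir controls where Spark stages scratch files such as
    # __spark_libs__*.zip; keeping it out of /tmp avoids races with
    # system tmp cleaners.
    return {"spark.local.dir": str(local_dir)}

conf = spark_test_conf()
```

The directory is created eagerly so the staged archive has somewhere durable to land before the YARN client reads it back.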
[jira] [Assigned] (HIVE-19565) Vectorization: Fix NULL / Wrong Results issues in STRING Functions
[ https://issues.apache.org/jira/browse/HIVE-19565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-19565: --- > Vectorization: Fix NULL / Wrong Results issues in STRING Functions > -- > > Key: HIVE-19565 > URL: https://issues.apache.org/jira/browse/HIVE-19565 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > Write new UT tests that use random data and intentional isRepeating batches > to check for NULL and Wrong Results for vectorized STRING functions: > * char_length > * concat > * initcap > * length > * lower > * ltrim > * octet_length > * regexp > * rtrim > * trim > * upper > * UDF: > ** hex > ** like > ** substr -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19250) Schema column definitions inconsistencies in MySQL
[ https://issues.apache.org/jira/browse/HIVE-19250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476514#comment-16476514 ] Vineet Garg commented on HIVE-19250: [~ngangam] We are preparing RC for 3.0 and branch-3 is closed for commits so I am reverting your commit from branch-3. > Schema column definitions inconsistencies in MySQL > -- > > Key: HIVE-19250 > URL: https://issues.apache.org/jira/browse/HIVE-19250 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Minor > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19250.patch, HIVE-19250.patch > > > There are some inconsistencies in column definitions in MySQL between a > schema that was upgraded to 2.1 (from an older release) vs installing the > 2.1.0 schema directly. > > `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 117d117 > < `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 135a136 > > `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 143d143 > < `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 156c156 > < `CTC_TXNID` bigint(20) DEFAULT NULL, > --- > > `CTC_TXNID` bigint(20) NOT NULL, > 158c158 > < `CTC_TABLE` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `CTC_TABLE` varchar(256) DEFAULT NULL, > 476c476 > < `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `TBL_NAME` varchar(256) DEFAULT NULL, > 664c664 > < KEY `PCS_STATS_IDX` > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`), > --- > > KEY `PCS_STATS_IDX` > > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`) USING BTREE, > 768c768 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 814c814 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 934c934 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 
1066d1065 > < `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1067a1067 > > `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1080c1080 > < `TC_TXNID` bigint(20) DEFAULT NULL, > --- > > `TC_TXNID` bigint(20) NOT NULL, > 1082c1082 > < `TC_TABLE` varchar(128) DEFAULT NULL, > --- > > `TC_TABLE` varchar(128) NOT NULL, > 1084c1084 > < `TC_OPERATION_TYPE` char(1) DEFAULT NULL, > --- > > `TC_OPERATION_TYPE` char(1) NOT NULL, -- This message was sent by Atlassian JIRA (v7.6.3#76005)
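The HIVE-19250 description above is raw `diff` output between a MySQL schema dump upgraded to 2.1 and a fresh 2.1.0 install. The same kind of comparison can be sketched with Python's `difflib`; the one-line excerpts here are taken from the `156c156` hunk quoted in the description:

```python
import difflib

# One line from each schema dump compared in the description:
# an upgraded-to-2.1 dump vs. a fresh 2.1.0 install.
upgraded = ["`CTC_TXNID` bigint(20) DEFAULT NULL,"]
fresh = ["`CTC_TXNID` bigint(20) NOT NULL,"]

# ndiff marks lines unique to each side; filtering to the "- " / "+ "
# markers mirrors the `<` / `>` lines of the diff hunks quoted above.
delta = [line for line in difflib.ndiff(upgraded, fresh)
         if line.startswith(("- ", "+ "))]
print(delta)
```

In practice one would feed it the full `SHOW CREATE TABLE` dumps of both schemas rather than single lines.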
[jira] [Updated] (HIVE-18652) Print Spark metrics on console
[ https://issues.apache.org/jira/browse/HIVE-18652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18652: Attachment: HIVE-18652.4.patch > Print Spark metrics on console > -- > > Key: HIVE-18652 > URL: https://issues.apache.org/jira/browse/HIVE-18652 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18652.1.patch, HIVE-18652.2.patch, > HIVE-18652.3.patch, HIVE-18652.4.patch > > > For Hive-on-MR, each MR job launched prints out some stats about the job: > {code} > INFO : 2018-02-07 17:51:11,218 Stage-1 map = 0%, reduce = 0% > INFO : 2018-02-07 17:51:18,396 Stage-1 map = 100%, reduce = 0%, Cumulative > CPU 1.87 sec > INFO : 2018-02-07 17:51:25,742 Stage-1 map = 100%, reduce = 100%, > Cumulative CPU 4.34 sec > INFO : MapReduce Total cumulative CPU time: 4 seconds 340 msec > INFO : Ended Job = job_1517865654989_0004 > INFO : MapReduce Jobs Launched: > INFO : Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 4.34 sec HDFS > Read: 7353 HDFS Write: 151 SUCCESS > INFO : Total MapReduce CPU Time Spent: 4 seconds 340 msec > {code} > We should do the same for Hive-on-Spark. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
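The console summary quoted in HIVE-18652 renders cumulative CPU time as e.g. "4 seconds 340 msec". A small sketch of that formatting (a hypothetical helper, not code from the patch):

```python
def format_cpu_time(millis: int) -> str:
    """Render cumulative CPU time the way the MR console summary does,
    e.g. 4340 ms -> "4 seconds 340 msec"."""
    seconds, msec = divmod(millis, 1000)
    return f"{seconds} seconds {msec} msec"

print(format_cpu_time(4340))  # -> 4 seconds 340 msec
```

A Hive-on-Spark equivalent would populate the milliseconds from the Spark job's aggregated task metrics instead of MR counters.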
[jira] [Assigned] (HIVE-19564) Vectorization: Fix NULL / Wrong Results issues in Functions
[ https://issues.apache.org/jira/browse/HIVE-19564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline reassigned HIVE-19564: --- > Vectorization: Fix NULL / Wrong Results issues in Functions > --- > > Key: HIVE-19564 > URL: https://issues.apache.org/jira/browse/HIVE-19564 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > > Write new UT tests that use random data and intentional isRepeating batches > to check for NULL and Wrong Results for vectorized functions: > * Generic UDF Functions > ** abs > ** bround > ** ceiling > ** floor > ** pmod > ** power > ** round > * UDF Functions > ** Acos > ** Asin > ** Atan > ** Bin > ** Cos > ** Degrees > ** Exp > ** Ln > ** Log > ** log10 > ** log2 > ** radians > ** rand > ** sign > ** sin > ** sqrt > ** tan -- This message was sent by Atlassian JIRA (v7.6.3#76005)
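The testing strategy described in HIVE-19564/HIVE-19565 — random data with NULLs, checked against a row-mode reference, plus deliberately repeating batches — can be sketched outside Hive. This is a toy Python stand-in for a vectorized expression (the real tests exercise Hive's VectorizedRowBatch, not code like this):

```python
import math
import random

def scalar_sqrt(x):
    # Row-mode reference semantics: NULL in, NULL out.
    return None if x is None else math.sqrt(x)

def batch_sqrt(batch, is_repeating=False):
    # Toy vectorized expression: when isRepeating is set, only entry 0
    # is meaningful and the result repeats it across the batch.
    if is_repeating:
        return [scalar_sqrt(batch[0])] * len(batch)
    return [scalar_sqrt(x) for x in batch]

random.seed(19564)
# Random batch with ~30% NULLs, mimicking the randomized UT data.
data = [None if random.random() < 0.3 else random.random() * 100
        for _ in range(64)]

# Wrong Results check: batch output must match the row-mode reference.
assert batch_sqrt(data) == [scalar_sqrt(x) for x in data]
# isRepeating check: a repeating batch must equal 64 copies of row 0.
assert batch_sqrt([data[0]] * 64, is_repeating=True) == [scalar_sqrt(data[0])] * 64
```

The value of the approach is that a disagreement between the batch path and the scalar reference pinpoints exactly the NULL-handling or isRepeating bug being hunted.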
[jira] [Updated] (HIVE-19307) Support ArrowOutputStream in LlapOutputFormatService
[ https://issues.apache.org/jira/browse/HIVE-19307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19307: -- Resolution: Fixed Status: Resolved (was: Patch Available) Committed to master > Support ArrowOutputStream in LlapOutputFormatService > > > Key: HIVE-19307 > URL: https://issues.apache.org/jira/browse/HIVE-19307 > Project: Hive > Issue Type: Task > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Attachments: HIVE-19307.3.patch, HIVE-19307.4.patch, > HIVE-19307.5.patch, HIVE-19307.6.patch, HIVE-19307.7.patch, HIVE-19307.8.patch > > > Support pushing Arrow batches through > org.apache.arrow.vector.ipc.ArrowOutputStream in LlapOutputFormatService. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19307) Support ArrowOutputStream in LlapOutputFormatService
[ https://issues.apache.org/jira/browse/HIVE-19307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19307: -- Fix Version/s: 3.1.0 > Support ArrowOutputStream in LlapOutputFormatService > > > Key: HIVE-19307 > URL: https://issues.apache.org/jira/browse/HIVE-19307 > Project: Hive > Issue Type: New Feature > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19307.3.patch, HIVE-19307.4.patch, > HIVE-19307.5.patch, HIVE-19307.6.patch, HIVE-19307.7.patch, HIVE-19307.8.patch > > > Support pushing Arrow batches through > org.apache.arrow.vector.ipc.ArrowOutputStream in LlapOutputFormatService. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19307) Support ArrowOutputStream in LlapOutputFormatService
[ https://issues.apache.org/jira/browse/HIVE-19307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-19307: -- Issue Type: New Feature (was: Task) > Support ArrowOutputStream in LlapOutputFormatService > > > Key: HIVE-19307 > URL: https://issues.apache.org/jira/browse/HIVE-19307 > Project: Hive > Issue Type: New Feature > Components: llap >Reporter: Eric Wohlstadter >Assignee: Eric Wohlstadter >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19307.3.patch, HIVE-19307.4.patch, > HIVE-19307.5.patch, HIVE-19307.6.patch, HIVE-19307.7.patch, HIVE-19307.8.patch > > > Support pushing Arrow batches through > org.apache.arrow.vector.ipc.ArrowOutputStream in LlapOutputFormatService. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)
[ https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19258: Attachment: HIVE-19258.08.patch > add originals support to MM tables (and make the conversion a metadata only > operation) > -- > > Key: HIVE-19258 > URL: https://issues.apache.org/jira/browse/HIVE-19258 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19258.01.patch, HIVE-19258.02.patch, > HIVE-19258.03.patch, HIVE-19258.04.patch, HIVE-19258.05.patch, > HIVE-19258.06.patch, HIVE-19258.07.patch, HIVE-19258.08.patch, > HIVE-19258.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19258) add originals support to MM tables (and make the conversion a metadata only operation)
[ https://issues.apache.org/jira/browse/HIVE-19258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476505#comment-16476505 ] Sergey Shelukhin commented on HIVE-19258: - Fixed the orc_merge7 issue, a bug introduced into FetchOperator > add originals support to MM tables (and make the conversion a metadata only > operation) > -- > > Key: HIVE-19258 > URL: https://issues.apache.org/jira/browse/HIVE-19258 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Attachments: HIVE-19258.01.patch, HIVE-19258.02.patch, > HIVE-19258.03.patch, HIVE-19258.04.patch, HIVE-19258.05.patch, > HIVE-19258.06.patch, HIVE-19258.07.patch, HIVE-19258.08.patch, > HIVE-19258.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-19563) Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1
[ https://issues.apache.org/jira/browse/HIVE-19563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez resolved HIVE-19563. Resolution: Duplicate > Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1 > > > Key: HIVE-19563 > URL: https://issues.apache.org/jira/browse/HIVE-19563 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > {noformat} > Client Execution succeeded but contained differences (error code = 1) after > executing tez_vector_dynpart_hashjoin_1.q > 407c407 > < -13036 1 > --- > > -8915 1 > 410c410 > < -8915 1 > --- > > -13036 1 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18117) Create TestCliDriver for HDFS EC
[ https://issues.apache.org/jira/browse/HIVE-18117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476498#comment-16476498 ] Hive QA commented on HIVE-18117: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 26s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 28s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 18s{color} | {color:blue} shims/common in master has 6 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 20s{color} | {color:blue} shims/0.23 in master has 7 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 41s{color} | {color:blue} itests/util in master has 55 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 3m 50s{color} | {color:blue} ql in master has 2320 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} testutils/ptest2 in master has 24 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 29s{color} | {color:red} qtest in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} The patch common passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 9s{color} | {color:red} shims/0.23: The patch generated 1 new + 69 unchanged - 0 fixed = 70 total (was 69) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} itests/qtest: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} The patch util passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} ql: The patch generated 0 new + 76 unchanged - 30 fixed = 76 total (was 106) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s{color} | {color:green} The patch ptest2 passed checkstyle {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 35m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10967/dev-support/hive-personality.sh | | git revision | master / 6e6b0cb | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | mvninstall |
[jira] [Commented] (HIVE-19563) Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1
[ https://issues.apache.org/jira/browse/HIVE-19563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476495#comment-16476495 ] Jesus Camacho Rodriguez commented on HIVE-19563: Disabled in https://github.com/apache/hive/commit/ff446b77961b50b84b1698eeed71dab35db10d52. Tracking in HIVE-19555 together with TestMiniLlapLocalCliDriver#tez_dynpart_hashjoin_1.q, since they are similar. Closing as duplicate. > Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1 > > > Key: HIVE-19563 > URL: https://issues.apache.org/jira/browse/HIVE-19563 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > {noformat} > Client Execution succeeded but contained differences (error code = 1) after > executing tez_vector_dynpart_hashjoin_1.q > 407c407 > < -13036 1 > --- > > -8915 1 > 410c410 > < -8915 1 > --- > > -13036 1 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19562) Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit
[ https://issues.apache.org/jira/browse/HIVE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19562: Attachment: HIVE-19562.1.patch > Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit > - > > Key: HIVE-19562 > URL: https://issues.apache.org/jira/browse/HIVE-19562 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19562.1.patch > > > Seeing sporadic failures during test setup. Specifically, when spark-submit > runs this error (or a similar error) gets thrown: > {code} > 2018-05-15T10:55:02,112 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: Exception in thread "main" > java.io.FileNotFoundException: File > file:/tmp/spark-56e217f7-b8a5-4c63-9a6b-d737a64f2820/__spark_libs__7371510645900072447.zip > does not exist > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at 
> org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:316) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:356) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:478) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:565) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.run(Client.scala:1146) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at >
[jira] [Updated] (HIVE-19562) Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit
[ https://issues.apache.org/jira/browse/HIVE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-19562: Status: Patch Available (was: Open) > Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit > - > > Key: HIVE-19562 > URL: https://issues.apache.org/jira/browse/HIVE-19562 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-19562.1.patch > > > Seeing sporadic failures during test setup. Specifically, when spark-submit > runs this error (or a similar error) gets thrown: > {code} > 2018-05-15T10:55:02,112 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: Exception in thread "main" > java.io.FileNotFoundException: File > file:/tmp/spark-56e217f7-b8a5-4c63-9a6b-d737a64f2820/__spark_libs__7371510645900072447.zip > does not exist > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > 
client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:316) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:356) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:478) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:565) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.run(Client.scala:1146) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at >
[jira] [Updated] (HIVE-19560) Retry test runner and retry rule for flaky tests
[ https://issues.apache.org/jira/browse/HIVE-19560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19560: --- Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Pushed to master, thanks [~prasanth_j] > Retry test runner and retry rule for flaky tests > > > Key: HIVE-19560 > URL: https://issues.apache.org/jira/browse/HIVE-19560 > Project: Hive > Issue Type: Improvement > Components: Test >Affects Versions: 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19560.1.patch > > > Implement a custom test runner that retries failed tests as a workaround for > flakiness. Also a test rule for retrying failed tests (for cases where a custom > test runner is not possible, e.g. ParametrizedTests, which already use a custom TestRunner). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
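The retry-runner idea from HIVE-19560 — re-run a failed test a bounded number of times and fail only if every attempt fails — can be sketched language-neutrally. This is a Python decorator illustrating the mechanism, not the JUnit code from the patch:

```python
import functools

def retry(times=3):
    """Re-run a flaky test up to `times` attempts; surface the last
    failure only if every attempt failed. Same idea as a JUnit retry
    rule or retrying test runner."""
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(times):
                try:
                    return test_fn(*args, **kwargs)
                except AssertionError as exc:
                    last_error = exc
            raise last_error
        return wrapper
    return decorator

attempts = []

@retry(times=3)
def flaky_test():
    # Deliberately fails on the first attempt, passes on the second.
    attempts.append(1)
    assert len(attempts) >= 2, "flaky failure on first attempt"

flaky_test()
print(len(attempts))  # -> 2: failed once, passed on the retry
```

The trade-off, as with any retry rule, is that it can mask genuine intermittent bugs, which is why it is positioned as a workaround rather than a fix.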
[jira] [Assigned] (HIVE-19563) Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1
[ https://issues.apache.org/jira/browse/HIVE-19563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere reassigned HIVE-19563: - Assignee: Jason Dere > Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1 > > > Key: HIVE-19563 > URL: https://issues.apache.org/jira/browse/HIVE-19563 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > > {noformat} > Client Execution succeeded but contained differences (error code = 1) after > executing tez_vector_dynpart_hashjoin_1.q > 407c407 > < -13036 1 > --- > > -8915 1 > 410c410 > < -8915 1 > --- > > -13036 1 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19563) Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1
[ https://issues.apache.org/jira/browse/HIVE-19563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476489#comment-16476489 ] Jason Dere commented on HIVE-19563: --- Looks like a couple of the queries are doing order-by on the wrong column > Flaky test: TestMiniLlapLocalCliDriver.tez_vector_dynpart_hashjoin_1 > > > Key: HIVE-19563 > URL: https://issues.apache.org/jira/browse/HIVE-19563 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Priority: Major > > {noformat} > Client Execution succeeded but contained differences (error code = 1) after > executing tez_vector_dynpart_hashjoin_1.q > 407c407 > < -13036 1 > --- > > -8915 1 > 410c410 > < -8915 1 > --- > > -13036 1 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
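The "order-by on the wrong column" diagnosis above is a classic non-total-ordering flake: rows that tie on the sorted column may legally come back in either order, so golden-file diffs like the `-13036` / `-8915` swap appear intermittently. A small Python sketch of the failure mode and the fix:

```python
# Two result rows that tie on the count column (index 1), using the
# values from the diff quoted in the issue description.
rows = [(-13036, 1), (-8915, 1)]

# Sorting only on the tied column leaves their relative order
# unspecified; a query engine may emit either permutation per run.
# (Python's sort happens to be stable, but SQL ORDER BY gives no such
# guarantee for ties.)
by_count = sorted(rows, key=lambda r: r[1])

# Sorting on the full row imposes a total order, making the golden
# output deterministic:
total_order = sorted(rows)
print(total_order)  # -> [(-13036, 1), (-8915, 1)]
```

Fixing the q-file to order by a unique column (or the whole row) removes the flakiness without masking real wrong-results bugs.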
[jira] [Commented] (HIVE-19317) Handle schema evolution from int like types to decimal
[ https://issues.apache.org/jira/browse/HIVE-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476486#comment-16476486 ] Vihang Karajgaonkar commented on HIVE-19317: +1 LGTM. > Handle schema evolution from int like types to decimal > -- > > Key: HIVE-19317 > URL: https://issues.apache.org/jira/browse/HIVE-19317 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19317.1.patch, HIVE-19317.2.patch, > HIVE-19317.3.patch, HIVE-19317.4.patch, HIVE-19317.5.patch > > > If int like type is changed to decimal on parquet data, select results in > errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19250) Schema column definitions inconsistencies in MySQL
[ https://issues.apache.org/jira/browse/HIVE-19250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Naveen Gangam updated HIVE-19250: - Resolution: Fixed Fix Version/s: 3.1.0 3.0.0 Status: Resolved (was: Patch Available) > Schema column definitions inconsistencies in MySQL > -- > > Key: HIVE-19250 > URL: https://issues.apache.org/jira/browse/HIVE-19250 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Minor > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19250.patch, HIVE-19250.patch > > > There are some inconsistencies in column definitions in MySQL between a > schema that was upgraded to 2.1 (from an older release) vs installing the > 2.1.0 schema directly. > > `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 117d117 > < `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 135a136 > > `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 143d143 > < `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 156c156 > < `CTC_TXNID` bigint(20) DEFAULT NULL, > --- > > `CTC_TXNID` bigint(20) NOT NULL, > 158c158 > < `CTC_TABLE` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `CTC_TABLE` varchar(256) DEFAULT NULL, > 476c476 > < `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `TBL_NAME` varchar(256) DEFAULT NULL, > 664c664 > < KEY `PCS_STATS_IDX` > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`), > --- > > KEY `PCS_STATS_IDX` > > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`) USING BTREE, > 768c768 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 814c814 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 934c934 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 1066d1065 > < `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 
1067a1067 > > `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1080c1080 > < `TC_TXNID` bigint(20) DEFAULT NULL, > --- > > `TC_TXNID` bigint(20) NOT NULL, > 1082c1082 > < `TC_TABLE` varchar(128) DEFAULT NULL, > --- > > `TC_TABLE` varchar(128) NOT NULL, > 1084c1084 > < `TC_OPERATION_TYPE` char(1) DEFAULT NULL, > --- > > `TC_OPERATION_TYPE` char(1) NOT NULL, -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19250) Schema column definitions inconsistencies in MySQL
[ https://issues.apache.org/jira/browse/HIVE-19250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476483#comment-16476483 ] Naveen Gangam commented on HIVE-19250: -- Fix has been pushed to master and branch-3. Thank you [~aihuaxu] for the review. > Schema column definitions inconsistencies in MySQL > -- > > Key: HIVE-19250 > URL: https://issues.apache.org/jira/browse/HIVE-19250 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Minor > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19250.patch, HIVE-19250.patch > > > There are some inconsistencies in column definitions in MySQL between a > schema that was upgraded to 2.1 (from an older release) vs installing the > 2.1.0 schema directly. > > `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 117d117 > < `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 135a136 > > `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 143d143 > < `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 156c156 > < `CTC_TXNID` bigint(20) DEFAULT NULL, > --- > > `CTC_TXNID` bigint(20) NOT NULL, > 158c158 > < `CTC_TABLE` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `CTC_TABLE` varchar(256) DEFAULT NULL, > 476c476 > < `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `TBL_NAME` varchar(256) DEFAULT NULL, > 664c664 > < KEY `PCS_STATS_IDX` > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`), > --- > > KEY `PCS_STATS_IDX` > > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`) USING BTREE, > 768c768 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 814c814 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 934c934 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 1066d1065 > < 
`TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1067a1067 > > `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1080c1080 > < `TC_TXNID` bigint(20) DEFAULT NULL, > --- > > `TC_TXNID` bigint(20) NOT NULL, > 1082c1082 > < `TC_TABLE` varchar(128) DEFAULT NULL, > --- > > `TC_TABLE` varchar(128) NOT NULL, > 1084c1084 > < `TC_OPERATION_TYPE` char(1) DEFAULT NULL, > --- > > `TC_OPERATION_TYPE` char(1) NOT NULL, -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19562) Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit
[ https://issues.apache.org/jira/browse/HIVE-19562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar reassigned HIVE-19562: --- > Flaky test: TestMiniSparkOnYarn FileNotFoundException in spark-submit > - > > Key: HIVE-19562 > URL: https://issues.apache.org/jira/browse/HIVE-19562 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > > Seeing sporadic failures during test setup. Specifically, when spark-submit > runs this error (or a similar error) gets thrown: > {code} > 2018-05-15T10:55:02,112 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: Exception in thread "main" > java.io.FileNotFoundException: File > file:/tmp/spark-56e217f7-b8a5-4c63-9a6b-d737a64f2820/__spark_libs__7371510645900072447.zip > does not exist > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:641) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:867) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:631) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:442) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365) > 
2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:316) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:356) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:478) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:565) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:863) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:169) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.Client.run(Client.scala:1146) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1518) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) > 2018-05-15T10:55:02,113 INFO 
> [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) > 2018-05-15T10:55:02,113 INFO > [RemoteDriver-stderr-redir-27d3dcfb-2a10-4118-9fae-c200d2e095a5 main] > client.SparkSubmitSparkClient: at > org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136) > 2018-05-15T10:55:02,113 INFO >
[jira] [Commented] (HIVE-19250) Schema column definitions inconsistencies in MySQL
[ https://issues.apache.org/jira/browse/HIVE-19250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476465#comment-16476465 ] Naveen Gangam commented on HIVE-19250: -- Forgot to post this on Monday. I had looked at the test failures and they seemed unrelated. Some of them are infra failures with no test files. Prior builds also had mostly the same failures. I do not expect schema changes of this nature to cause these failures. So +1 from me. > Schema column definitions inconsistencies in MySQL > -- > > Key: HIVE-19250 > URL: https://issues.apache.org/jira/browse/HIVE-19250 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Minor > Attachments: HIVE-19250.patch, HIVE-19250.patch > > > There are some inconsistencies in column definitions in MySQL between a > schema that was upgraded to 2.1 (from an older release) vs installing the > 2.1.0 schema directly. > > `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 117d117 > < `CQ_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 135a136 > > `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 143d143 > < `CC_TBLPROPERTIES` varchar(2048) DEFAULT NULL, > 156c156 > < `CTC_TXNID` bigint(20) DEFAULT NULL, > --- > > `CTC_TXNID` bigint(20) NOT NULL, > 158c158 > < `CTC_TABLE` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `CTC_TABLE` varchar(256) DEFAULT NULL, > 476c476 > < `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT > NULL, > --- > > `TBL_NAME` varchar(256) DEFAULT NULL, > 664c664 > < KEY `PCS_STATS_IDX` > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`), > --- > > KEY `PCS_STATS_IDX` > > (`DB_NAME`,`TABLE_NAME`,`COLUMN_NAME`,`PARTITION_NAME`) USING BTREE, > 768c768 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 814c814 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE 
latin1_bin, > 934c934 > < `PARAM_VALUE` mediumtext, > --- > > `PARAM_VALUE` mediumtext CHARACTER SET latin1 COLLATE latin1_bin, > 1066d1065 > < `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1067a1067 > > `TXN_HEARTBEAT_COUNT` int(11) DEFAULT NULL, > 1080c1080 > < `TC_TXNID` bigint(20) DEFAULT NULL, > --- > > `TC_TXNID` bigint(20) NOT NULL, > 1082c1082 > < `TC_TABLE` varchar(128) DEFAULT NULL, > --- > > `TC_TABLE` varchar(128) NOT NULL, > 1084c1084 > < `TC_OPERATION_TYPE` char(1) DEFAULT NULL, > --- > > `TC_OPERATION_TYPE` char(1) NOT NULL, -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19440) Make StorageBasedAuthorizer work with information schema
[ https://issues.apache.org/jira/browse/HIVE-19440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476456#comment-16476456 ] Daniel Dai commented on HIVE-19440: --- HIVE-19440.5.patch addresses Thejas' review comments and retargets to Hive 3.1.0 > Make StorageBasedAuthorizer work with information schema > > > Key: HIVE-19440 > URL: https://issues.apache.org/jira/browse/HIVE-19440 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19440.1.patch, HIVE-19440.2.patch, > HIVE-19440.3.patch, HIVE-19440.4.patch, HIVE-19440.5.patch > > > With HIVE-19161, Hive information schema works with an external authorizer (such > as Ranger). However, we also need to make StorageBasedAuthorizer > synchronization work as it is also widely used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19440) Make StorageBasedAuthorizer work with information schema
[ https://issues.apache.org/jira/browse/HIVE-19440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19440: -- Attachment: HIVE-19440.5.patch > Make StorageBasedAuthorizer work with information schema > > > Key: HIVE-19440 > URL: https://issues.apache.org/jira/browse/HIVE-19440 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19440.1.patch, HIVE-19440.2.patch, > HIVE-19440.3.patch, HIVE-19440.4.patch, HIVE-19440.5.patch > > > With HIVE-19161, Hive information schema works with an external authorizer (such > as Ranger). However, we also need to make StorageBasedAuthorizer > synchronization work as it is also widely used. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19561) Update README.md to update requirements for Hadoop and RELEASE_NOTES
[ https://issues.apache.org/jira/browse/HIVE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19561: --- Summary: Update README.md to update requirements for Hadoop and RELEASE_NOTES (was: Update README.md to update requirements for Hadoop) > Update README.md to update requirements for Hadoop and RELEASE_NOTES > > > Key: HIVE-19561 > URL: https://issues.apache.org/jira/browse/HIVE-19561 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19561.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19561) Update README.md to update requirements for Hadoop and RELEASE_NOTES
[ https://issues.apache.org/jira/browse/HIVE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476452#comment-16476452 ] Vineet Garg commented on HIVE-19561: Also updated release notes. I will commit this in master and then cherry-pick this in branch-3 > Update README.md to update requirements for Hadoop and RELEASE_NOTES > > > Key: HIVE-19561 > URL: https://issues.apache.org/jira/browse/HIVE-19561 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19561.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19561) Update README.md to update requirements for Hadoop
[ https://issues.apache.org/jira/browse/HIVE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19561: --- Attachment: HIVE-19561.1.patch > Update README.md to update requirements for Hadoop > -- > > Key: HIVE-19561 > URL: https://issues.apache.org/jira/browse/HIVE-19561 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-19561.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19561) Update README.md to update requirements for Hadoop
[ https://issues.apache.org/jira/browse/HIVE-19561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19561: --- Attachment: (was: HIVE-19561.1.patch) > Update README.md to update requirements for Hadoop > -- > > Key: HIVE-19561 > URL: https://issues.apache.org/jira/browse/HIVE-19561 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19317) Handle schema evolution from int like types to decimal
[ https://issues.apache.org/jira/browse/HIVE-19317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16476450#comment-16476450 ] Hive QA commented on HIVE-19317: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12923343/HIVE-19317.5.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 14404 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[mm_all] (batchId=152) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183) org.apache.hive.hcatalog.pig.TestSequenceFileHCatStorer.testWriteDate2 (batchId=196) org.apache.hive.hcatalog.pig.TestTextFileHCatStorer.testWriteTimestamp (batchId=196) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill (batchId=240) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10966/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10966/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10966/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12923343 - PreCommit-HIVE-Build > Handle schema evolution from int like types to decimal > -- > > Key: HIVE-19317 > URL: https://issues.apache.org/jira/browse/HIVE-19317 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19317.1.patch, HIVE-19317.2.patch, > HIVE-19317.3.patch, HIVE-19317.4.patch, HIVE-19317.5.patch > > > If an int-like type is changed to decimal on Parquet data, SELECT results in > errors. -- This message was sent by Atlassian JIRA (v7.6.3#76005)