[jira] [Updated] (HIVE-21864) LlapBaseInputFormat#closeAll() throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HIVE-21864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shubham Chaurasia updated HIVE-21864:
-------------------------------------
    Attachment: HIVE-21864.5.patch

> LlapBaseInputFormat#closeAll() throws ConcurrentModificationException
> ----------------------------------------------------------------------
>
>                 Key: HIVE-21864
>                 URL: https://issues.apache.org/jira/browse/HIVE-21864
>             Project: Hive
>          Issue Type: Bug
>          Components: llap
>    Affects Versions: 3.1.1
>            Reporter: Shubham Chaurasia
>            Assignee: Shubham Chaurasia
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21864.1.patch, HIVE-21864.2.patch, HIVE-21864.3.patch, HIVE-21864.4.patch, HIVE-21864.5.patch
>
>          Time Spent: 1h 40m
>  Remaining Estimate: 0h

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
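For context, the exception named in the title is the fail-fast contract of java.util collections: a structural modification while iterating makes the iterator throw. A minimal, hypothetical sketch of the buggy pattern and a safe close-all (this is illustrative only, not the actual LlapBaseInputFormat code; the map and method names are invented):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical stand-in for a handle-to-connection map like the one a
// closeAll() method would walk.
public class CloseAllDemo {
    public static final Map<String, AutoCloseable> CONNECTIONS = new HashMap<>();

    // Buggy: Map.remove() during a for-each over entrySet() is a structural
    // modification, so the iterator throws ConcurrentModificationException
    // on its next step.
    public static void closeAllBuggy() throws Exception {
        for (Map.Entry<String, AutoCloseable> e : CONNECTIONS.entrySet()) {
            e.getValue().close();
            CONNECTIONS.remove(e.getKey());
        }
    }

    // Safe: remove entries through the iterator itself, which keeps the
    // iterator's bookkeeping consistent.
    public static void closeAllSafe() throws Exception {
        Iterator<Map.Entry<String, AutoCloseable>> it = CONNECTIONS.entrySet().iterator();
        while (it.hasNext()) {
            it.next().getValue().close();
            it.remove();
        }
    }
}
```

Other common fixes are iterating over a copy of the key set, or using a ConcurrentHashMap, whose iterators are weakly consistent and never throw this exception.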
[jira] [Commented] (HIVE-21783) Avoid authentication for connection from the same domain
[ https://issues.apache.org/jira/browse/HIVE-21783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864927#comment-16864927 ]

Prasanth Jayachandran commented on HIVE-21783:
----------------------------------------------
Had to revert the patch a couple of times, as I had missed updating the author of the patch. Merging the PR also did not show the author correctly. Fixed it now.

> Avoid authentication for connection from the same domain
> ---------------------------------------------------------
>
>                 Key: HIVE-21783
>                 URL: https://issues.apache.org/jira/browse/HIVE-21783
>             Project: Hive
>          Issue Type: New Feature
>          Components: HiveServer2
>            Reporter: Ashutosh Bapat
>            Assignee: Ashutosh Bapat
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>         Attachments: HIVE-21783.01.patch, HIVE-21783.02.patch
>
>          Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> When a connection comes from the same domain do not authenticate the user.
> This is similar to NONE authentication but only for the connection from the
> same domain.
[jira] [Work logged] (HIVE-21783) Avoid authentication for connection from the same domain
[ https://issues.apache.org/jira/browse/HIVE-21783?focusedWorklogId=260986=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-260986 ]

ASF GitHub Bot logged work on HIVE-21783:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Jun/19 02:58
            Start Date: 16/Jun/19 02:58
    Worklog Time Spent: 10m
      Work Description: prasanthj commented on pull request #675: Revert "HIVE-21783: Accept Hive connections from the same domain without authentication."
URL: https://github.com/apache/hive/pull/675
Reverts apache/hive#648

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
    Worklog Id: (was: 260986)
    Time Spent: 4h (was: 3h 50m)
[jira] [Work logged] (HIVE-21783) Avoid authentication for connection from the same domain
[ https://issues.apache.org/jira/browse/HIVE-21783?focusedWorklogId=260987=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-260987 ]

ASF GitHub Bot logged work on HIVE-21783:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Jun/19 02:58
            Start Date: 16/Jun/19 02:58
    Worklog Time Spent: 10m
      Work Description: prasanthj commented on pull request #675: Revert "HIVE-21783: Accept Hive connections from the same domain without authentication."
URL: https://github.com/apache/hive/pull/675

Issue Time Tracking
-------------------
    Worklog Id: (was: 260987)
    Time Spent: 4h 10m (was: 4h)
[jira] [Work logged] (HIVE-21783) Avoid authentication for connection from the same domain
[ https://issues.apache.org/jira/browse/HIVE-21783?focusedWorklogId=260985=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-260985 ]

ASF GitHub Bot logged work on HIVE-21783:
-----------------------------------------
                Author: ASF GitHub Bot
            Created on: 16/Jun/19 02:57
            Start Date: 16/Jun/19 02:57
    Worklog Time Spent: 10m
      Work Description: prasanthj commented on pull request #648: HIVE-21783: Accept Hive connections from the same domain without authentication.
URL: https://github.com/apache/hive/pull/648

Issue Time Tracking
-------------------
    Worklog Id: (was: 260985)
    Time Spent: 3h 50m (was: 3h 40m)
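The feature under discussion amounts to a trust decision based on the client's DNS domain: skip authentication when the client's hostname shares the server's domain suffix. A rough, hypothetical sketch of such a check (not the actual HiveServer2 implementation; both helper names are invented):

```java
// Hedged sketch: a client is considered "same domain" when the DNS suffix
// of its fully qualified hostname equals the server's. Hostnames without a
// dot (e.g. "localhost") have no domain and never match.
public class SameDomainCheck {
    // Everything after the first dot, lowercased; "" when there is no domain.
    public static String domainOf(String fqdn) {
        int dot = fqdn.indexOf('.');
        return dot < 0 ? "" : fqdn.substring(dot + 1).toLowerCase();
    }

    public static boolean sameDomain(String clientHost, String serverHost) {
        String serverDomain = domainOf(serverHost);
        return !serverDomain.isEmpty() && domainOf(clientHost).equals(serverDomain);
    }
}
```

A real deployment would resolve the client address via reverse DNS before this comparison, which is exactly why such a scheme is security-sensitive and comparable to NONE authentication, as the issue description notes.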
[jira] [Updated] (HIVE-21764) REPL DUMP should detect and bootstrap any rename table events where old table was excluded but renamed table is included.
[ https://issues.apache.org/jira/browse/HIVE-21764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

mahesh kumar behera updated HIVE-21764:
---------------------------------------
    Description:

* REPL DUMP takes two inputs in addition to the existing FROM and WITH clauses.
{code:java}
REPL DUMP <current_repl_policy> [REPLACE <previous_repl_policy>] FROM <event_id> WITH <key_values>;

- current_repl_policy and previous_repl_policy can be any format mentioned in Point-4.
- The REPLACE clause takes the previous repl policy as input. If the REPLACE clause is absent, the policy remains unchanged.
- The rest of the format remains the same.{code}
* REPL DUMP on this DB will then replicate the tables based on current_repl_policy.
* Currently, single-table replication of the form <db_name>.t1 is not supported for table-level replication, so it will not be supported in the REPLACE clause either.
* Any table that becomes included dynamically, either through a change in the regular expression or by being added to the include list, should be bootstrapped using an independent table-level replication policy.
{code:java}
- Hive will automatically figure out the list of newly included tables by comparing the current_repl_policy and previous_repl_policy inputs, and combine a bootstrap dump for the added tables as part of the incremental dump. A "_bootstrap" directory can be created in the dump dir to accommodate all tables to be bootstrapped.
- If a table is renamed, it may get dynamically added to or removed from replication based on the defined replication policy plus include/exclude list. So, Hive will perform a bootstrap for a table that is only included after the rename.
- Tables added after the previous policy run and before the REPLACE policy will be replicated using bootstrap if the table name satisfies inclusion in both policies. The events generated for those tables will be ignored while dumping events.{code}
* REPL LOAD should check for changes in the REPL policy and drop the tables/views excluded in the new policy compared to the previous one. This should be done before performing the incremental and bootstrap load from the current dump. Both policies will be stored in the _bootstrap directory and used during REPL LOAD to drop the redundant tables.
* REPL LOAD on an incremental dump should load the event directories first, then check for the "_bootstrap" directory and perform a bootstrap load on it.

Rename table is not in scope of this Jira.

  was:
REPL DUMP fetches the events from the NOTIFICATION_LOG table based on a regular expression plus inclusion/exclusion list. So, in the case of a rename-table event, the event is ignored if the old table name doesn't match the pattern, but the new table should be bootstrapped. REPL DUMP should have a mechanism to detect such tables and automatically bootstrap them with incremental replication. Also, if the renamed table is excluded from the replication policy, the old table needs to be dropped at the target as well.

> REPL DUMP should detect and bootstrap any rename table events where old table
> was excluded but renamed table is included.
> ------------------------------------------------------------------------------
>
>                 Key: HIVE-21764
>                 URL: https://issues.apache.org/jira/browse/HIVE-21764
>             Project: Hive
>          Issue Type: Sub-task
>          Components: repl
>            Reporter: Sankar Hariappan
>            Assignee: mahesh kumar behera
>            Priority: Major
>              Labels: DR, Replication
>
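The "figure out the list of newly included tables" step described above boils down to a set difference between the tables matched by the two policies. A minimal, hypothetical sketch of that comparison, assuming regex-only policies (the real policy format also carries include/exclude lists and is not shown here):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.regex.Pattern;

// Hedged sketch: a table needs bootstrap when it matches the current
// policy's table pattern but did not match the previous policy's pattern.
public class PolicyDiff {
    public static List<String> tablesToBootstrap(Collection<String> tables,
                                                 String currentRegex,
                                                 String previousRegex) {
        Pattern current = Pattern.compile(currentRegex);
        Pattern previous = Pattern.compile(previousRegex);
        List<String> result = new ArrayList<>();
        for (String table : tables) {
            // Newly included: in the current policy, absent from the previous one.
            if (current.matcher(table).matches() && !previous.matcher(table).matches()) {
                result.add(table);
            }
        }
        return result;
    }
}
```

Tables in this difference would go into the "_bootstrap" directory of the incremental dump, while events for them are skipped, matching the flow the description lays out.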
[jira] [Commented] (HIVE-21832) New metrics to get the average queue/serving/response time
[ https://issues.apache.org/jira/browse/HIVE-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864911#comment-16864911 ] Hive QA commented on HIVE-21832: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12971891/HIVE-21832.7.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 16154 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites (batchId=246) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17594/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17594/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17594/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12971891 - PreCommit-HIVE-Build

> New metrics to get the average queue/serving/response time
> -----------------------------------------------------------
>
>                 Key: HIVE-21832
>                 URL: https://issues.apache.org/jira/browse/HIVE-21832
>             Project: Hive
>          Issue Type: Sub-task
>          Components: llap
>            Reporter: Peter Vary
>            Assignee: Antal Sinkovits
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-21832.2.patch, HIVE-21832.3.patch, HIVE-21832.4.patch, HIVE-21832.5.patch, HIVE-21832.6.patch, HIVE-21832.7.patch, HIVE-21832.patch
>
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Simple [DescriptiveStatistics|https://commons.apache.org/proper/commons-math/javadocs/api-3.6/src-html/org/apache/commons/math3/stat/descriptive/DescriptiveStatistics.html#line.60] with window size would do here. Time is not important in this case.
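The DescriptiveStatistics class referenced in the description, when given a window size, keeps only the last N samples, so getMean() becomes a moving average over the window. A self-contained sketch of that behaviour in plain Java (this is not the commons-math implementation, just an illustration of the windowed-mean semantics):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Fixed-window running mean: once more than windowSize samples have been
// added, the oldest sample falls out of the window.
public class WindowedMean {
    private final int windowSize;
    private final Deque<Double> window = new ArrayDeque<>();
    private double sum;

    public WindowedMean(int windowSize) {
        this.windowSize = windowSize;
    }

    public void addValue(double v) {
        window.addLast(v);
        sum += v;
        if (window.size() > windowSize) {
            sum -= window.removeFirst(); // evict the oldest sample
        }
    }

    public double getMean() {
        return window.isEmpty() ? Double.NaN : sum / window.size();
    }
}
```

This matches the description's point that wall-clock time is irrelevant here: the window is defined by sample count, not by age.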
[jira] [Commented] (HIVE-21832) New metrics to get the average queue/serving/response time
[ https://issues.apache.org/jira/browse/HIVE-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864905#comment-16864905 ] Hive QA commented on HIVE-21832: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 56s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} common in master has 62 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 48s{color} | {color:blue} llap-server in master has 81 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 39s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s{color} | {color:red} llap-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 21s{color} | {color:red} llap-server in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 21s{color} | {color:red} llap-server in the patch failed. {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} llap-server: The patch generated 2 new + 143 unchanged - 0 fixed = 145 total (was 143) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s{color} | {color:red} llap-server in the patch failed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 58s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17594/dev-support/hive-personality.sh | | git revision | master / c6a2d79 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | mvninstall | http://104.198.109.242/logs//PreCommit-HIVE-Build-17594/yetus/patch-mvninstall-llap-server.txt | | compile | http://104.198.109.242/logs//PreCommit-HIVE-Build-17594/yetus/patch-compile-llap-server.txt | | javac | http://104.198.109.242/logs//PreCommit-HIVE-Build-17594/yetus/patch-compile-llap-server.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-17594/yetus/diff-checkstyle-llap-server.txt | | findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-17594/yetus/patch-findbugs-llap-server.txt | | modules | C: common llap-server U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17594/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > New metrics to get the average queue/serving/response time > -- > > Key: HIVE-21832 > URL: https://issues.apache.org/jira/browse/HIVE-21832 > Project: Hive > Issue Type: Sub-task > Components: llap >Reporter: Peter Vary >Assignee: Antal Sinkovits >Priority: Major > Labels: pull-request-available > Attachments: HIVE-21832.2.patch, HIVE-21832.3.patch, > HIVE-21832.4.patch, HIVE-21832.5.patch, HIVE-21832.6.patch, >
[jira] [Updated] (HIVE-21832) New metrics to get the average queue/serving/response time
[ https://issues.apache.org/jira/browse/HIVE-21832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Antal Sinkovits updated HIVE-21832:
-----------------------------------
    Attachment: HIVE-21832.7.patch
[jira] [Commented] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864900#comment-16864900 ] Hive QA commented on HIVE-21737: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12971886/0001-HIVE-21737-Bump-Apache-Avro-to-1.9.0.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 96 failed/errored test(s), 16154 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column2] (batchId=98) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column3] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column] (batchId=28) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_add_column_extschema] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_alter_table_update_columns] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_charvarchar] (batchId=31) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_comments] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_compression_enabled_native] (batchId=42) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_date] (batchId=10) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_decimal_native] (batchId=30) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_decimal_old] (batchId=51) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_deserialize_map_null] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_historical_timestamp] (batchId=98) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_joins_native] (batchId=97) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_native] (batchId=59) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_partitioned_native] (batchId=6) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_schema_evolution_native] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avro_timestamp] (batchId=32) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[avrocountemptytbl] (batchId=90) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_ppd_non_deterministic] (batchId=78) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_avro_partition_uniontype] (batchId=51) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_map_null] (batchId=93) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_deterministic_expr] (batchId=20) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update] (batchId=180) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[schema_evol_orc_acid_part_update_llap_io] (batchId=182) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[avro_decimal] (batchId=101) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[avro_compression_enabled_native] (batchId=129) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[avro_decimal_native] (batchId=125) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[avro_joins_native] (batchId=153) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroBinarySchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroBooleanSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroBytesSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroCharSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroDateSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroDecimalSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroDoubleSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroFloatSchema 
(batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroIntSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroListSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroLongSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroMapSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroNestedStructSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroShortSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroStringSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroStructSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroTimestampSchema (batchId=347) org.apache.hadoop.hive.serde2.avro.TestTypeInfoToSchema.createAvroUnionSchema (batchId=347)
[jira] [Commented] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864899#comment-16864899 ] Hive QA commented on HIVE-21737: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 57s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 39s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 18s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 43s{color} | {color:blue} serde in master has 193 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} llap-common in master has 76 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 16s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 38s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} serde: The patch generated 0 new + 34 unchanged - 1 fixed = 34 total (was 35) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch llap-common passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} ql: The patch generated 0 new + 1 unchanged - 3 fixed = 1 total (was 4) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 5s{color} | {color:green} root: The patch generated 0 new + 42 unchanged - 4 fixed = 42 total (was 46) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 72m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17593/dev-support/hive-personality.sh | | git revision | master / c6a2d79 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: serde llap-common ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17593/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Upgrade Avro to version 1.9.0 > - > > Key: HIVE-21737 > URL: https://issues.apache.org/jira/browse/HIVE-21737 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Ismaël Mejía >Assignee:
[jira] [Commented] (HIVE-21814) Implement list partitions related methods on temporary tables
[ https://issues.apache.org/jira/browse/HIVE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864875#comment-16864875 ] Hive QA commented on HIVE-21814: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12971885/HIVE-21814.02.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 16292 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=111) [explainanalyze_4.q,acid_vectorization_original_tez.q,orc_merge12.q,tez_union_with_udf.q] org.apache.hadoop.hive.ql.TestTxnExIm.testExportPart (batchId=322) org.apache.hadoop.hive.ql.TestTxnExIm.testExportPartPartial (batchId=322) org.apache.hadoop.hive.ql.TestTxnExIm.testExportPartPartial2 (batchId=322) org.apache.hadoop.hive.ql.TestTxnExIm.testExportPartPartial3 (batchId=322) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17592/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17592/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17592/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12971885 - PreCommit-HIVE-Build

> Implement list partitions related methods on temporary tables
> --------------------------------------------------------------
>
>                 Key: HIVE-21814
>                 URL: https://issues.apache.org/jira/browse/HIVE-21814
>             Project: Hive
>          Issue Type: Sub-task
>            Reporter: Laszlo Pinter
>            Assignee: Laszlo Pinter
>            Priority: Major
>         Attachments: HIVE-21814.01.patch, HIVE-21814.02.patch
>
> IMetaStoreClient exposes the following methods related to listing partitions:
> {code:java}
> List<String> listPartitionNames(String db_name, String tbl_name, List<String> part_vals, short max_parts);
> List<String> listPartitionNames(String catName, String db_name, String tbl_name, List<String> part_vals, int max_parts);
> List<Partition> listPartitions(String db_name, String tbl_name, List<String> part_vals, short max_parts);
> List<Partition> listPartitions(String catName, String db_name, String tbl_name, List<String> part_vals, int max_parts);
> List<String> listPartitionNames(String db_name, String tbl_name, short max_parts);
> List<String> listPartitionNames(String catName, String db_name, String tbl_name, int max_parts);
> List<String> listPartitionNames(String db_name, String tbl_name, List<String> part_vals, short max_parts);
> List<String> listPartitionNames(String catName, String db_name, String tbl_name, List<String> part_vals, int max_parts);
> PartitionSpecProxy listPartitionSpecs(String dbName, String tableName, int maxParts);
> PartitionSpecProxy listPartitionSpecs(String catName, String dbName, String tableName, int maxParts);
> List<Partition> listPartitionsWithAuthInfo(String dbName, String tableName, List<String> partialPvals, short maxParts, String userName, List<String> groupNames);
> List<Partition> listPartitionsWithAuthInfo(String catName, String dbName, String tableName, List<String> partialPvals, int maxParts, String userName, List<String> groupNames);
> {code}
> In order to support partitions on temporary tables, the majority of these
> methods must be implemented in SessionHiveMetastoreClient.
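For temporary tables these calls cannot go to the metastore; they have to filter an in-memory partition list instead. A hedged sketch of the listPartitionNames(part_vals, max_parts) semantics (illustrative only, not the SessionHiveMetastoreClient code): partition names have the form "k1=v1/k2=v2/...", part_vals supplies leading values with an empty string acting as a wildcard for that position, and max_parts caps the result size (a negative value meaning unlimited).

```java
import java.util.ArrayList;
import java.util.List;

// Hedged in-memory sketch of partial-value partition-name filtering for
// temporary tables. Class and method names are illustrative.
public class TempTablePartitions {
    public static List<String> listPartitionNames(List<String> names,
                                                  List<String> partVals,
                                                  int maxParts) {
        List<String> out = new ArrayList<>();
        for (String name : names) {
            String[] kvs = name.split("/");
            boolean match = partVals.size() <= kvs.length;
            for (int i = 0; match && i < partVals.size(); i++) {
                // Value part of the i-th "key=value" component.
                String value = kvs[i].substring(kvs[i].indexOf('=') + 1);
                if (!partVals.get(i).isEmpty() && !partVals.get(i).equals(value)) {
                    match = false;
                }
            }
            if (match) {
                out.add(name);
            }
            if (maxParts >= 0 && out.size() >= maxParts) {
                break; // respect the caller's cap on results
            }
        }
        return out;
    }
}
```

The real client also has to handle authorization arguments and PartitionSpecProxy grouping, which this sketch deliberately leaves out.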
[jira] [Commented] (HIVE-21814) Implement list partitions related methods on temporary tables
[ https://issues.apache.org/jira/browse/HIVE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864871#comment-16864871 ] Hive QA commented on HIVE-21814: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 58s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 15s{color} | {color:blue} standalone-metastore/metastore-server in master has 184 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 11s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 32m 37s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17592/dev-support/hive-personality.sh | | git revision | master / c6a2d79 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore/metastore-server ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17592/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Implement list partitions related methods on temporary tables > - > > Key: HIVE-21814 > URL: https://issues.apache.org/jira/browse/HIVE-21814 > Project: Hive > Issue Type: Sub-task >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-21814.01.patch, HIVE-21814.02.patch > > > IMetaStoreClient exposes the following methods related to listing partitions: > {code:java} > List listPartitionNames(String db_name, String tbl_name, List > part_vals, short max_parts); > List listPartitionNames(String catName, String db_name, String > tbl_name, List part_vals, int max_parts); > List listPartitions(String db_name, String tbl_name, List > part_vals, short max_parts); > List listPartitions(String catName, String db_name, String > tbl_name, List part_vals, int max_parts); > List listPartitionNames(String db_name, String tbl_name, short > max_parts); > List listPartitionNames(String catName, String db_name, String > tbl_name, int max_parts); > List listPartitionNames(String db_name, String tbl_name, List
[jira] [Commented] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864849#comment-16864849 ] Hive QA commented on HIVE-21763: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12971884/HIVE-21763.01.patch {color:green}SUCCESS:{color} +1 due to 3 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 16155 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.ql.parse.TestReplAcidTablesBootstrapWithJsonMessage.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites (batchId=248) org.apache.hadoop.hive.ql.parse.TestReplicationScenariosAcidTablesBootstrap.testBootstrapAcidTablesDuringIncrementalWithConcurrentWrites (batchId=246) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17591/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17591/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17591/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12971884 - PreCommit-HIVE-Build > Incremental replication to allow changing include/exclude tables list in > replication policy. 
> > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21763.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > - REPL DUMP takes 2 inputs along with the existing FROM and WITH clauses. > {code} > - REPL DUMP <current_repl_policy> [REPLACE <previous_repl_policy>] FROM <last_repl_id> WITH <key_values>; > - current_repl_policy and previous_repl_policy can be any format mentioned in > Point-4. > - The REPLACE clause takes the previous repl policy as input. If the > REPLACE clause is not there, then the policy remains unchanged. > - The rest of the format remains the same. > {code} > - Now, REPL DUMP on this DB will replicate the tables based on > current_repl_policy. > - Single table replication of the format <dbname>.t1 doesn't allow changing the > policy dynamically, so the REPLACE clause is not allowed if previous_repl_policy > is of this format. > - Any table that is added dynamically, either due to a change in the regular > expression or by being added to the include list, should be bootstrapped using an independent > table-level replication policy. > {code} > - Hive will automatically figure out the list of tables newly included in the > list by comparing the current_repl_policy & previous_repl_policy inputs, and > combine a bootstrap dump for the added tables as part of the incremental dump. > A "_bootstrap" directory can be created in the dump dir to accommodate all tables > to be bootstrapped. > - If any table is renamed, then it may get dynamically added/removed for > replication based on the defined replication policy + include/exclude list. So, > Hive will perform bootstrap for the table which is just included after the rename. > {code} > - REPL LOAD should check for changes in the repl policy and drop the tables/views > excluded in the new policy compared to the previous policy. This should be done > before performing the incremental and bootstrap load from the current dump. 
> - REPL LOAD on an incremental dump should load the event directories first, and then > check for the "_bootstrap" directory and perform bootstrap load on it. > Table rename is not in scope of this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
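The load ordering in the last two bullets (replay the incremental event directories first, then bootstrap anything under "_bootstrap") can be sketched as a simple planning step. This is an illustrative sketch, not Hive's actual REPL LOAD code; the class and method names are hypothetical, and it assumes event directories are named by numeric event id with "_bootstrap" as a sibling entry.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch (not Hive's REPL LOAD implementation): given the
// entry names of an incremental dump directory, replay event directories
// in ascending event-id order first, then process "_bootstrap" last.
public class IncrementalLoadPlanner {
    public static List<String> planLoad(List<String> dumpEntries) {
        List<String> plan = new ArrayList<>();
        dumpEntries.stream()
                .filter(name -> !name.equals("_bootstrap"))
                // Assumption: event directories are named by numeric event id.
                .sorted(Comparator.comparingLong(Long::parseLong))
                .forEach(name -> plan.add("replay-events:" + name));
        if (dumpEntries.contains("_bootstrap")) {
            // Bootstrap the newly included tables only after all events are applied.
            plan.add("bootstrap:_bootstrap");
        }
        return plan;
    }
}
```

The point of the ordering is that the bootstrap dump was taken after the events were captured, so replaying events first keeps the bootstrapped tables from being rewound by older events.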
[jira] [Updated] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fokko Driesprong updated HIVE-21737: Attachment: 0001-HIVE-21737-Bump-Apache-Avro-to-1.9.0.patch Status: Patch Available (was: In Progress) > Upgrade Avro to version 1.9.0 > - > > Key: HIVE-21737 > URL: https://issues.apache.org/jira/browse/HIVE-21737 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Ismaël Mejía >Assignee: Fokko Driesprong >Priority: Minor > Labels: pull-request-available > Attachments: 0001-HIVE-21737-Bump-Apache-Avro-to-1.9.0.patch > > Time Spent: 10m > Remaining Estimate: 0h > > Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner > version of Avro without Jackson in the public API. Worth the update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?focusedWorklogId=260943=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-260943 ] ASF GitHub Bot logged work on HIVE-21737: - Author: ASF GitHub Bot Created on: 15/Jun/19 20:00 Start Date: 15/Jun/19 20:00 Worklog Time Spent: 10m Work Description: Fokko commented on pull request #674: HIVE-21737: Bump Apache Avro to 1.9.0 URL: https://github.com/apache/hive/pull/674 Apache Avro 1.9.0 brings a lot of new features: * Deprecate Joda-Time in favor of Java8 JSR310 and setting it as default * Remove support for Hadoop 1.x * Move from Jackson 1.x to 2.9 * Add ZStandard Codec * Lots of updates on the dependencies to fix CVE's * Remove Jackson classes from public API * Apache Avro is built by default with Java 8 * Apache Avro is compiled and tested with Java 11 to guarantee compatibility * Apache Avro MapReduce is compiled and tested with Hadoop 3 * Apache Avro is now leaner, multiple dependencies were removed: guava, paranamer, commons-codec, and commons-logging * and many, many more! This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 260943) Time Spent: 10m Remaining Estimate: 0h > Upgrade Avro to version 1.9.0 > - > > Key: HIVE-21737 > URL: https://issues.apache.org/jira/browse/HIVE-21737 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Ismaël Mejía >Assignee: Fokko Driesprong >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner > version of Avro without Jackson in the public API. Worth the update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
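For downstream projects, the upgrade itself is just a dependency bump. A hypothetical Maven fragment using Avro's standard coordinates is shown below; the actual patch edits Hive's own build files, which may manage the version through a shared property rather than an inline version:

```xml
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro</artifactId>
  <version>1.9.0</version>
</dependency>
```

Since 1.9.0 removes Jackson types from Avro's public API, any code that relied on those transitively exposed classes needs to declare Jackson as a direct dependency after the bump.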
[jira] [Updated] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-21737: -- Labels: pull-request-available (was: ) > Upgrade Avro to version 1.9.0 > - > > Key: HIVE-21737 > URL: https://issues.apache.org/jira/browse/HIVE-21737 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Ismaël Mejía >Assignee: Fokko Driesprong >Priority: Minor > Labels: pull-request-available > > Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner > version of Avro without Jackson in the public API. Worth the update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-21737 started by Fokko Driesprong. --- > Upgrade Avro to version 1.9.0 > - > > Key: HIVE-21737 > URL: https://issues.apache.org/jira/browse/HIVE-21737 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Ismaël Mejía >Assignee: Fokko Driesprong >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner > version of Avro without Jackson in the public API. Worth the update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21737) Upgrade Avro to version 1.9.0
[ https://issues.apache.org/jira/browse/HIVE-21737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fokko Driesprong reassigned HIVE-21737: --- Assignee: Fokko Driesprong > Upgrade Avro to version 1.9.0 > - > > Key: HIVE-21737 > URL: https://issues.apache.org/jira/browse/HIVE-21737 > Project: Hive > Issue Type: Improvement > Components: Hive >Reporter: Ismaël Mejía >Assignee: Fokko Driesprong >Priority: Minor > > Avro 1.9.0 was released recently. It brings a lot of fixes including a leaner > version of Avro without Jackson in the public API. Worth the update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864844#comment-16864844 ] Hive QA commented on HIVE-21763: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 42s{color} | {color:blue} standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 16s{color} | {color:blue} ql in master has 2260 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 45s{color} | {color:blue} itests/hive-unit in master has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 29s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 44s{color} | {color:red} ql: The patch generated 6 new + 121 unchanged - 2 fixed = 127 total (was 123) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 22s{color} | {color:red} ql generated 9 new + 2251 unchanged - 9 fixed = 2260 total (was 2260) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Should org.apache.hadoop.hive.ql.parse.HiveParser$DFA238 be a _static_ inner class? At HiveParser.java:inner class? 
At HiveParser.java:[lines 48389-48402] | | | Dead store to LA33_128 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:[line 48129] | | | Dead store to LA33_130 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:[line 48142] | | | Dead store to LA33_132 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:[line 48155] | | | Dead store to LA33_134 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:[line 48168] | | | Dead store to LA33_136 in org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:org.apache.hadoop.hive.ql.parse.HiveParser$DFA33.specialStateTransition(int, IntStream) At HiveParser.java:[line 48181] | | | Dead store to LA33_138 in
[jira] [Updated] (HIVE-21814) Implement list partitions related methods on temporary tables
[ https://issues.apache.org/jira/browse/HIVE-21814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Pinter updated HIVE-21814: - Attachment: HIVE-21814.02.patch > Implement list partitions related methods on temporary tables > - > > Key: HIVE-21814 > URL: https://issues.apache.org/jira/browse/HIVE-21814 > Project: Hive > Issue Type: Sub-task >Reporter: Laszlo Pinter >Assignee: Laszlo Pinter >Priority: Major > Attachments: HIVE-21814.01.patch, HIVE-21814.02.patch > > > IMetaStoreClient exposes the following methods related to listing partitions: > {code:java} > List listPartitionNames(String db_name, String tbl_name, List > part_vals, short max_parts); > List listPartitionNames(String catName, String db_name, String > tbl_name, List part_vals, int max_parts); > List listPartitions(String db_name, String tbl_name, List > part_vals, short max_parts); > List listPartitions(String catName, String db_name, String > tbl_name, List part_vals, int max_parts); > List listPartitionNames(String db_name, String tbl_name, short > max_parts); > List listPartitionNames(String catName, String db_name, String > tbl_name, int max_parts); > List listPartitionNames(String db_name, String tbl_name, List > part_vals, short max_parts); > List listPartitionNames(String catName, String db_name, String > tbl_name, List part_vals, int max_parts); > PartitionSpecProxy listPartitionSpecs(String dbName, String tableName, int > maxParts); > PartitionSpecProxy listPartitionSpecs(String catName, String dbName, String > tableName,int maxParts); > List listPartitionsWithAuthInfo(String dbName, String tableName, > List partialPvals, short maxParts, String userName, List > groupNames); > List listPartitionsWithAuthInfo(String catName, String dbName, > String tableName, List partialPvals, int maxParts, String userName, > List groupNames); > {code} > In order to support partitions on temporary tables, the majority of these > methods must be implemented in 
SessionHiveMetastoreClient. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21762) REPL DUMP to support new format for replication policy input to take included tables list.
[ https://issues.apache.org/jira/browse/HIVE-21762?focusedWorklogId=260928=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-260928 ] ASF GitHub Bot logged work on HIVE-21762: - Author: ASF GitHub Bot Created on: 15/Jun/19 18:24 Start Date: 15/Jun/19 18:24 Worklog Time Spent: 10m Work Description: sankarh commented on pull request #664: HIVE-21762: REPL DUMP to support new format for replication policy input to take included tables list. URL: https://github.com/apache/hive/pull/664 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 260928) Time Spent: 6h (was: 5h 50m) > REPL DUMP to support new format for replication policy input to take included > tables list. > -- > > Key: HIVE-21762 > URL: https://issues.apache.org/jira/browse/HIVE-21762 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21762.01.patch, HIVE-21762.02.patch, > HIVE-21762.03.patch, HIVE-21762.04.patch, HIVE-21762.05.patch, > HIVE-21762.06.patch, HIVE-21762.07.patch > > Time Spent: 6h > Remaining Estimate: 0h > > - REPL DUMP syntax: > {code} > REPL DUMP <repl_policy> [FROM <last_repl_id>] WITH <key_values>; > {code} > - The new format for the replication policy has 3 parts, all separated by a dot > (.). > 1. The first part is the DB name. > 2. The second part is the included list: comma-separated table names/regexes within > square brackets []. If square brackets are not there, then it is treated as > single-table replication, which skips DB-level events. > 3. The third part is the excluded list: comma-separated table names/regexes within > square brackets []. 
> {code} > <dbname> -- Full DB replication, which is currently supported > <dbname>.['.*?'] -- Full DB replication > <dbname>.[] -- Replicate just functions and do not include any tables. > <dbname>.['t1', 't2'] -- DB replication with a static list of tables t1 and > t2 included. > <dbname>.['t1*', 't2', '*t3'].['t100', '5t3', 't4'] -- DB replication with > all tables having prefix t1 or suffix t3, plus table t2, excluding > t100 (which has the prefix t1), 5t3 (which has the suffix t3), and t4. > {code} > - Need to support regular expressions of any format. > - A table is included in the dump only if it matches the regular expressions in the > included list and doesn't match the excluded list. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
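The inclusion rule in the last bullet (match at least one included pattern, match no excluded pattern) can be sketched with ordinary `java.util.regex` matching. This is an illustrative sketch, not Hive's actual policy-parsing code; the class and method names are made up, and the policy patterns are written here directly in Java regex syntax (e.g. `t1.*` for "prefix t1").

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Illustrative sketch (not Hive's implementation) of the matching rule:
// a table is dumped only if it matches at least one pattern in the
// included list AND matches no pattern in the excluded list.
public class ReplPolicyMatcher {
    private final List<Pattern> included;
    private final List<Pattern> excluded;

    public ReplPolicyMatcher(List<String> includeRegexes, List<String> excludeRegexes) {
        this.included = includeRegexes.stream().map(Pattern::compile).collect(Collectors.toList());
        this.excluded = excludeRegexes.stream().map(Pattern::compile).collect(Collectors.toList());
    }

    public boolean shouldDump(String tableName) {
        boolean in = included.stream().anyMatch(p -> p.matcher(tableName).matches());
        boolean out = excluded.stream().anyMatch(p -> p.matcher(tableName).matches());
        return in && !out;
    }
}
```

With the example policy above expressed as regexes (includes `t1.*`, `t2`, `.*t3`; excludes `t100`, `5t3`, `t4`), a table like `t1_sales` is dumped, while `t100` matches the include prefix but is filtered out by the exclude list.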
[jira] [Commented] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864825#comment-16864825 ] Sankar Hariappan commented on HIVE-21763: - 01.patch has implemented 1. REPL DUMP changes to take the REPLACE clause and trigger bootstrap of newly included tables as per the new replication policy. 2. REPL LOAD changes to detect tables that are excluded in the new replication policy and drop them. 3. Dump/read the new replication policy in the _dumpmetadata file. > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21763.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > - REPL DUMP takes 2 inputs along with the existing FROM and WITH clauses. > {code} > - REPL DUMP <current_repl_policy> [REPLACE <previous_repl_policy>] FROM <last_repl_id> WITH <key_values>; > - current_repl_policy and previous_repl_policy can be any format mentioned in > Point-4. > - The REPLACE clause takes the previous repl policy as input. If the > REPLACE clause is not there, then the policy remains unchanged. > - The rest of the format remains the same. > {code} > - Now, REPL DUMP on this DB will replicate the tables based on > current_repl_policy. > - Single table replication of the format <dbname>.t1 doesn't allow changing the > policy dynamically, so the REPLACE clause is not allowed if previous_repl_policy > is of this format. > - Any table that is added dynamically, either due to a change in the regular > expression or by being added to the include list, should be bootstrapped using an independent > table-level replication policy. 
> {code} > - Hive will automatically figure out the list of tables newly included in the > list by comparing the current_repl_policy & previous_repl_policy inputs, and > combine a bootstrap dump for the added tables as part of the incremental dump. > A "_bootstrap" directory can be created in the dump dir to accommodate all tables > to be bootstrapped. > - If any table is renamed, then it may get dynamically added/removed for > replication based on the defined replication policy + include/exclude list. So, > Hive will perform bootstrap for the table which is just included after the rename. > {code} > - REPL LOAD should check for changes in the repl policy and drop the tables/views > excluded in the new policy compared to the previous policy. This should be done > before performing the incremental and bootstrap load from the current dump. > - REPL LOAD on an incremental dump should load the event directories first, and then > check for the "_bootstrap" directory and perform bootstrap load on it. > Table rename is not in scope of this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
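The policy-diff step described in the comment above (bootstrap tables that only the new policy includes; drop tables that only the old policy included) amounts to two filtered set differences. A minimal sketch, assuming a policy can be represented as a predicate over table names; the class and method names are hypothetical, not Hive's actual classes.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of diffing an old and a new replication policy:
// tables matching only the new policy need a bootstrap dump; tables
// matching only the old policy must be dropped on the target.
public class ReplPolicyDiff {
    public static List<String> tablesToBootstrap(Collection<String> allTables,
                                                 Predicate<String> oldPolicy,
                                                 Predicate<String> newPolicy) {
        List<String> out = new ArrayList<>();
        for (String t : allTables) {
            if (newPolicy.test(t) && !oldPolicy.test(t)) {
                out.add(t); // newly included -> goes into the "_bootstrap" dump
            }
        }
        return out;
    }

    public static List<String> tablesToDrop(Collection<String> allTables,
                                            Predicate<String> oldPolicy,
                                            Predicate<String> newPolicy) {
        List<String> out = new ArrayList<>();
        for (String t : allTables) {
            if (oldPolicy.test(t) && !newPolicy.test(t)) {
                out.add(t); // no longer included -> dropped during REPL LOAD
            }
        }
        return out;
    }
}
```

Tables that match both policies continue with normal incremental replication and appear in neither list.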
[jira] [Updated] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21763: Status: Patch Available (was: Open) > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21763.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > - REPL DUMP takes 2 inputs along with existing FROM and WITH clause. > {code} > - REPL DUMP [REPLACE FROM > WITH ; > - current_repl_policy and previous_repl_policy can be any format mentioned in > Point-4. > - REPLACE clause to be supported to take previous repl policy as input. If > REPLACE clause is not there, then the policy remains unchanged. > - Rest of the format remains same. > {code} > - Now, REPL DUMP on this DB will replicate the tables based on > current_repl_policy. > - Single table replication of format .t1 doesn’t allow changing the > policy dynamically. So REPLACE clause is not allowed if previous_repl_policy > of this format. > - If any table is added dynamically either due to change in regular > expression or added to include list should be bootstrapped using independant > table level replication policy. > {code} > - Hive will automatically figure out the list of tables newly included in the > list by comparing the current_repl_policy & previous_repl_policy inputs and > combine bootstrap dump for added tables as part of incremental dump. > "_bootstrap" directory can be created in dump dir to accommodate all tables > to be bootstrapped. > - If any table is renamed, then it may gets dynamically added/removed for > replication based on defined replication policy + include/exclude list. 
So, > Hive will perform bootstrap for the table which is just included after the rename. > {code} > - REPL LOAD should check for changes in the repl policy and drop the tables/views > excluded in the new policy compared to the previous policy. This should be done > before performing the incremental and bootstrap load from the current dump. > - REPL LOAD on an incremental dump should load the event directories first, and then > check for the "_bootstrap" directory and perform bootstrap load on it. > Table rename is not in scope of this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21763: Attachment: HIVE-21763.01.patch > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Attachments: HIVE-21763.01.patch > > Time Spent: 10m > Remaining Estimate: 0h > > - REPL DUMP takes 2 inputs along with existing FROM and WITH clause. > {code} > - REPL DUMP [REPLACE FROM > WITH ; > - current_repl_policy and previous_repl_policy can be any format mentioned in > Point-4. > - REPLACE clause to be supported to take previous repl policy as input. If > REPLACE clause is not there, then the policy remains unchanged. > - Rest of the format remains same. > {code} > - Now, REPL DUMP on this DB will replicate the tables based on > current_repl_policy. > - Single table replication of format .t1 doesn’t allow changing the > policy dynamically. So REPLACE clause is not allowed if previous_repl_policy > of this format. > - If any table is added dynamically either due to change in regular > expression or added to include list should be bootstrapped using independant > table level replication policy. > {code} > - Hive will automatically figure out the list of tables newly included in the > list by comparing the current_repl_policy & previous_repl_policy inputs and > combine bootstrap dump for added tables as part of incremental dump. > "_bootstrap" directory can be created in dump dir to accommodate all tables > to be bootstrapped. > - If any table is renamed, then it may gets dynamically added/removed for > replication based on defined replication policy + include/exclude list. 
So, > Hive will perform bootstrap for the table which is just included after the rename. > {code} > - REPL LOAD should check for changes in the repl policy and drop the tables/views > excluded in the new policy compared to the previous policy. This should be done > before performing the incremental and bootstrap load from the current dump. > - REPL LOAD on an incremental dump should load the event directories first, and then > check for the "_bootstrap" directory and perform bootstrap load on it. > Table rename is not in scope of this JIRA. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?focusedWorklogId=260927=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-260927 ] ASF GitHub Bot logged work on HIVE-21763: - Author: ASF GitHub Bot Created on: 15/Jun/19 18:21 Start Date: 15/Jun/19 18:21 Worklog Time Spent: 10m Work Description: sankarh commented on pull request #673: HIVE-21763: Incremental replication to allow changing include/exclude tables list in replication policy. URL: https://github.com/apache/hive/pull/673 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 260927) Time Spent: 10m Remaining Estimate: 0h > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl > Reporter: Sankar Hariappan > Assignee: Sankar Hariappan > Priority: Major > Labels: DR, Replication, pull-request-available > Time Spent: 10m > Remaining Estimate: 0h -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-21763: -- Labels: DR Replication pull-request-available (was: DR Replication) > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl > Reporter: Sankar Hariappan > Assignee: Sankar Hariappan > Priority: Major > Labels: DR, Replication, pull-request-available -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan reassigned HIVE-21763: --- Assignee: Sankar Hariappan (was: mahesh kumar behera) > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl > Reporter: Sankar Hariappan > Assignee: Sankar Hariappan > Priority: Major > Labels: DR, Replication -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21761) Support table level replication in Hive
[ https://issues.apache.org/jira/browse/HIVE-21761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21761: Description: *Requirements:* {code} - User needs to define a replication policy to replicate any specific table. This enables the user to replicate only the business-critical tables instead of replicating all tables, which may throttle the network bandwidth and storage and also slow down Hive replication. - User needs to define the replication policy using regular expressions (such as db.sales_*), and needs to include additional tables which do not match the given pattern and exclude some tables which do match it. - User needs to dynamically add/remove tables to/from the list by manually changing the replication policy at run time. {code} *Design:* {code} 1. Hive continues to support the DB level replication policy of format <db_name>.*, but logically we support the policy as <db_name>.(t1, t3, …). 2. Regular expressions can also be supported as replication policy. For example, a. <db_name>.[<prefix>*], b. <db_name>.[*<suffix>], c. <db_name>.[<regex>]. 3. If a regular expression is provided as the replication policy, then Hive also accepts include and exclude lists as input, which also helps to dynamically add/remove tables for replication. a. The exclude list specifies the tables to be excluded even if they satisfy the regular expression. b. The include list specifies the tables to be included in addition to the tables satisfying the regular expression. 4. The new format for the replication policy has 3 parts, all separated with a dot (.). a. First part is the DB name. b. Second part is the include list: comma-separated table names/regex within square brackets []. If the square brackets are not there, then it is treated as single table replication, which skips DB level events. c. Third part is the exclude list: comma-separated table names/regex within square brackets []. - <db_name>.* -- Full DB replication, which is currently supported. - <db_name>.['.*?'] -- Full DB replication. - <db_name>.[] -- Replicate just functions and do not include any tables. - <db_name>.['t1', 't3'] -- DB replication with a static list of tables t1 and t3 included. - <db_name>.['t1*', 't2'].['t100'] -- DB replication with all tables having prefix t1, also including table t2 which doesn't have prefix t1, and excluding t100 which has the prefix t1. 5. If the DB property "repl.source.for" is set, then by default all the tables in the DB will be enabled for replication and will continue to archive deleted data to the CM path. 6. REPL DUMP takes 2 inputs along with the existing FROM and WITH clauses. a. REPL DUMP <current_repl_policy> [REPLACE <previous_repl_policy>] FROM <last_repl_id> WITH <key_value_configs>; current_repl_policy and previous_repl_policy can be any format mentioned in Point-4. b. The REPLACE clause takes the previous repl policy as input. c. The rest of the format remains the same. 7. Now, REPL DUMP on this DB will replicate the tables based on current_repl_policy. 8. If any table is added dynamically, either due to a change in the regular expression or by being added to the include list, it should be bootstrapped. a. Hive will automatically figure out the list of newly included tables by comparing the current_repl_policy & previous_repl_policy inputs, and combine the bootstrap dump for added tables as part of the incremental dump. As we can combine the first incremental with the bootstrap dump, this removes the current limitation of the target DB being inconsistent after bootstrap until the first incremental replication is run. b. If any table is renamed, it may get dynamically added/removed for replication based on the defined replication policy + include/exclude list. So, Hive will perform bootstrap for a table which is newly included after a rename. c. Also, if the renamed table is excluded from the replication policy, then the old table needs to be dropped at the target as well. 9. Only the initial bootstrap load expects the target DB to be empty; the intermediate bootstrap on tables due to a regex or include/exclude list change or renames doesn't expect the target DB or table to be empty. If any table with the same name exists during such a bootstrap, the table will be overwritten, including data. {code}
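The three-part policy format described in the design above (DB name, optional include list in square brackets, optional exclude list in square brackets) can be modeled with a small parser. This is a hedged sketch under invented names (`parse_policy` is not a Hive function), and it ignores quoting and escaping corner cases Hive's real grammar would have to handle:

```python
import re

def parse_policy(policy):
    """Split a policy of the form db.*, db.[incl], or db.[incl].[excl].

    Returns (db_name, include_list, exclude_list). A missing bracket part
    comes back as None (no constraint), while empty brackets come back as
    [] (e.g. db.[] replicates only functions, no tables).
    """
    m = re.fullmatch(r"(\w+)(?:\.\*)?(?:\.\[(.*?)\])?(?:\.\[(.*?)\])?", policy)
    if not m:
        raise ValueError("malformed replication policy: " + policy)
    db, incl, excl = m.groups()

    def split(part):
        if part is None:  # bracket part absent entirely
            return None
        # strip whitespace and the single quotes around each table name/regex
        return [p.strip().strip("'") for p in part.split(",") if p.strip()]

    return db, split(incl), split(excl)

print(parse_policy("salesdb.['t1*', 't2'].['t100']"))
# ('salesdb', ['t1*', 't2'], ['t100'])
```

Distinguishing "part absent" (None) from "part empty" ([]) matters because, per the format above, `db.[]` means functions-only replication rather than no constraint.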
[jira] [Updated] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21763: Description: > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl > Reporter: Sankar Hariappan > Assignee: mahesh kumar behera > Priority: Major > Labels: DR, Replication -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21763: Description: > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl > Reporter: Sankar Hariappan > Assignee: mahesh kumar behera > Priority: Major > Labels: DR, Replication -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21763) Incremental replication to allow changing include/exclude tables list in replication policy.
[ https://issues.apache.org/jira/browse/HIVE-21763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-21763: Description: > Incremental replication to allow changing include/exclude tables list in > replication policy. > > > Key: HIVE-21763 > URL: https://issues.apache.org/jira/browse/HIVE-21763 > Project: Hive > Issue Type: Sub-task > Components: repl > Reporter: Sankar Hariappan > Assignee: mahesh kumar behera > Priority: Major > Labels: DR, Replication -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864661#comment-16864661 ] Hive QA commented on HIVE-21830: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12971871/HIVE-21830.09.patch {color:green}SUCCESS:{color} +1 due to 7 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 16156 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17590/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17590/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17590/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12971871 - PreCommit-HIVE-Build > Break up DDLTask - extract rest of the Alter Table operations > - > > Key: HIVE-21830 > URL: https://issues.apache.org/jira/browse/HIVE-21830 > Project: Hive > Issue Type: Sub-task > Components: Hive >Affects Versions: 3.1.1 >Reporter: Miklos Gergely >Assignee: Miklos Gergely >Priority: Major > Labels: refactor-ddl > Fix For: 4.0.0 > > Attachments: HIVE-21830.01.patch, HIVE-21830.02.patch, > HIVE-21830.03.patch, HIVE-21830.04.patch, HIVE-21830.05.patch, > HIVE-21830.06.patch, HIVE-21830.07.patch, HIVE-21830.08.patch, > HIVE-21830.09.patch > > > DDLTask is a huge class, more than 5000 lines long. The related DDLWork is > also a huge class, which has a field for each DDL operation it supports. 
The > goal is to refactor these in order to have everything cut into more > manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl: > * have a separate class for each operation > * have a package for each operation group (database ddl, table ddl, etc), so > the amount of classes under a package is more manageable > * make all the requests (DDLDesc subclasses) immutable > * DDLTask should be agnostic to the actual operations > * for now, ignore the issue of having some operations handled by > DDLTask which are not actual DDL operations (lock, unlock, desc...) > In the interim, while there are two DDLTask and DDLWork classes in the > code base, the new ones in the new package are called DDLTask2 and DDLWork2, > thus avoiding the usage of fully qualified class names where both the old and > the new classes are in use. > Step #10: extract the alter table operations that are left in the old DDLTask > and move them under the new packages. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
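The refactoring goals above (one class per operation, immutable desc objects, a DDLTask that is agnostic to the actual operations) amount to a registry/dispatch pattern. Below is a minimal sketch of that pattern, in Python rather than Hive's Java, with invented names; it illustrates the design, not Hive's actual classes:

```python
from dataclasses import dataclass

OPERATIONS = {}  # maps a desc (request) type to its operation class

def ddl_operation(desc_type):
    """Register an operation class as the handler for one desc type."""
    def register(cls):
        OPERATIONS[desc_type] = cls
        return cls
    return register

@dataclass(frozen=True)  # immutable request object, like the DDLDesc subclasses
class RenameTableDesc:
    old_name: str
    new_name: str

@ddl_operation(RenameTableDesc)
class RenameTableOperation:
    def __init__(self, desc):
        self.desc = desc

    def execute(self):
        return f"ALTER TABLE {self.desc.old_name} RENAME TO {self.desc.new_name}"

def run_ddl(desc):
    """Generic task: dispatches on the desc type instead of one giant switch."""
    return OPERATIONS[type(desc)](desc).execute()

print(run_ddl(RenameTableDesc("t1", "t2")))  # ALTER TABLE t1 RENAME TO t2
```

Each new operation is then an isolated class plus one registration line, which is what lets the generic task stay small while the 5000-line class shrinks.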
[jira] [Commented] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864658#comment-16864658 ]

Hive QA commented on HIVE-21830:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
Prechecks
| +1 | @author    | 0m 0s  | The patch does not contain any @author tags. |
master Compile Tests
|  0 | mvndep     | 1m 39s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 23s | master passed |
| +1 | compile    | 3m 16s | master passed |
| +1 | checkstyle | 2m 2s  | master passed |
|  0 | findbugs   | 2m 43s | standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. |
|  0 | findbugs   | 1m 15s | standalone-metastore/metastore-server in master has 184 extant Findbugs warnings. |
|  0 | findbugs   | 4m 21s | ql in master has 2260 extant Findbugs warnings. |
|  0 | findbugs   | 0m 31s | accumulo-handler in master has 21 extant Findbugs warnings. |
|  0 | findbugs   | 0m 29s | druid-handler in master has 3 extant Findbugs warnings. |
|  0 | findbugs   | 0m 33s | hbase-handler in master has 15 extant Findbugs warnings. |
| +1 | javadoc    | 3m 5s  | master passed |
Patch Compile Tests
|  0 | mvndep     | 0m 30s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 3s  | the patch passed |
| +1 | compile    | 3m 24s | the patch passed |
| +1 | javac      | 3m 24s | the patch passed |
| -1 | checkstyle | 0m 53s | ql: The patch generated 5 new + 772 unchanged - 59 fixed = 777 total (was 831) |
| +1 | whitespace | 0m 0s  | The patch has no whitespace issues. |
| +1 | findbugs   | 2m 54s | metastore-common in the patch passed. |
| +1 | findbugs   | 1m 27s | metastore-server in the patch passed. |
| +1 | findbugs   | 4m 37s | ql generated 0 new + 2255 unchanged - 5 fixed = 2255 total (was 2260) |
| +1 | findbugs   | 0m 40s | accumulo-handler in the patch passed. |
| +1 | findbugs   | 0m 40s | druid-handler in the patch passed. |
| +1 | findbugs   | 0m 42s | hbase-handler in the patch passed. |
| +1 | javadoc    | 3m 7s  | the patch passed |
Other Tests
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
|    | total      | 53m 38s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Miklos Gergely updated HIVE-21830:
Attachment: (was: HIVE-21830.09.patch)

> Break up DDLTask - extract rest of the Alter Table operations
> -------------------------------------------------------------
>
> Key: HIVE-21830
> URL: https://issues.apache.org/jira/browse/HIVE-21830
> Project: Hive
> Issue Type: Sub-task
> Components: Hive
> Affects Versions: 3.1.1
> Reporter: Miklos Gergely
> Assignee: Miklos Gergely
> Priority: Major
> Labels: refactor-ddl
> Fix For: 4.0.0
>
> Attachments: HIVE-21830.01.patch, HIVE-21830.02.patch, HIVE-21830.03.patch, HIVE-21830.04.patch, HIVE-21830.05.patch, HIVE-21830.06.patch, HIVE-21830.07.patch, HIVE-21830.08.patch, HIVE-21830.09.patch
>
> DDLTask is a huge class, more than 5000 lines long. The related DDLWork is also a huge class, which has a field for each DDL operation it supports. The goal is to refactor these so that everything is cut into more manageable classes under the package org.apache.hadoop.hive.ql.exec.ddl:
> * have a separate class for each operation
> * have a package for each operation group (database DDL, table DDL, etc.), so the number of classes under a package is more manageable
> * make all the requests (DDLDesc subclasses) immutable
> * DDLTask should be agnostic to the actual operations
> * for now, ignore the issue that some operations handled by DDLTask are not actual DDL operations (lock, unlock, desc...)
> In the interim, while there are two DDLTask and DDLWork classes in the code base, the new ones in the new package are called DDLTask2 and DDLWork2, avoiding the use of fully qualified class names where both the old and the new classes are in use.
> Step #10: extract the alter table operations that are left in the old DDLTask, and move them under the new packages.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
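The bullet list in the description outlines the target shape of the refactor. A minimal sketch of that shape follows; all names in it (DdlOperation, AlterTableRenameDesc, AlterTableRenameOperation, DdlTaskSketch) are hypothetical illustrations, not Hive's actual classes. It shows an immutable request object, a per-operation class, and a task that stays agnostic to the concrete operation:

```java
// Illustrative sketch only; class names are made up, not Hive's real API.

// An immutable request object, one per DDL operation (the "DDLDesc subclass").
final class AlterTableRenameDesc {
    private final String oldName;
    private final String newName;

    AlterTableRenameDesc(String oldName, String newName) {
        this.oldName = oldName;
        this.newName = newName;
    }

    String getOldName() { return oldName; }
    String getNewName() { return newName; }
}

// One class per operation; the generic task only knows this interface.
interface DdlOperation {
    int execute();
}

final class AlterTableRenameOperation implements DdlOperation {
    private final AlterTableRenameDesc desc;

    AlterTableRenameOperation(AlterTableRenameDesc desc) {
        this.desc = desc;
    }

    @Override
    public int execute() {
        // A real implementation would talk to the metastore here.
        System.out.println("rename " + desc.getOldName() + " -> " + desc.getNewName());
        return 0;
    }
}

// The task is agnostic: it just runs whatever operation it is handed,
// instead of holding one field and one branch per DDL operation.
public class DdlTaskSketch {
    public static int run(DdlOperation op) {
        return op.execute();
    }

    public static void main(String[] args) {
        int rc = run(new AlterTableRenameOperation(new AlterTableRenameDesc("t1", "t2")));
        System.out.println(rc); // prints 0
    }
}
```

The point of this shape is that adding a new DDL operation means adding one desc class and one operation class, with no change to the task itself.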
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Attachment: HIVE-21830.09.patch
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Status: Open (was: Patch Available)
[jira] [Commented] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864648#comment-16864648 ]

Hive QA commented on HIVE-21830:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12971868/HIVE-21830.09.patch

SUCCESS: +1 due to 7 test(s) being added or modified.
ERROR: -1 due to 2 failed/errored test(s), 16156 tests executed

Failed tests:
{noformat}
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testComplexQuery (batchId=273)
org.apache.hive.jdbc.TestJdbcWithMiniLlapArrow.testKillQuery (batchId=273)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17589/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17589/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17589/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 2 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12971868 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864645#comment-16864645 ]

Hive QA commented on HIVE-21830:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
Prechecks
| +1 | @author    | 0m 0s  | The patch does not contain any @author tags. |
master Compile Tests
|  0 | mvndep     | 0m 55s | Maven dependency ordering for branch |
| +1 | mvninstall | 8m 29s | master passed |
| +1 | compile    | 3m 14s | master passed |
| +1 | checkstyle | 1m 56s | master passed |
|  0 | findbugs   | 2m 44s | standalone-metastore/metastore-common in master has 31 extant Findbugs warnings. |
|  0 | findbugs   | 1m 16s | standalone-metastore/metastore-server in master has 184 extant Findbugs warnings. |
|  0 | findbugs   | 4m 16s | ql in master has 2260 extant Findbugs warnings. |
|  0 | findbugs   | 0m 32s | accumulo-handler in master has 21 extant Findbugs warnings. |
|  0 | findbugs   | 0m 30s | druid-handler in master has 3 extant Findbugs warnings. |
|  0 | findbugs   | 0m 33s | hbase-handler in master has 15 extant Findbugs warnings. |
| +1 | javadoc    | 3m 7s  | master passed |
Patch Compile Tests
|  0 | mvndep     | 0m 29s | Maven dependency ordering for patch |
| +1 | mvninstall | 3m 59s | the patch passed |
| +1 | compile    | 3m 16s | the patch passed |
| +1 | javac      | 3m 16s | the patch passed |
| -1 | checkstyle | 0m 52s | ql: The patch generated 5 new + 772 unchanged - 59 fixed = 777 total (was 831) |
| +1 | whitespace | 0m 0s  | The patch has no whitespace issues. |
| +1 | findbugs   | 2m 48s | metastore-common in the patch passed. |
| +1 | findbugs   | 1m 25s | metastore-server in the patch passed. |
| +1 | findbugs   | 4m 38s | ql generated 0 new + 2255 unchanged - 5 fixed = 2255 total (was 2260) |
| +1 | findbugs   | 0m 41s | accumulo-handler in the patch passed. |
| +1 | findbugs   | 0m 39s | druid-handler in the patch passed. |
| +1 | findbugs   | 0m 41s | hbase-handler in the patch passed. |
| +1 | javadoc    | 3m 9s  | the patch passed |
Other Tests
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
|    | total      | 53m 10s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality |
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Attachment: (was: HIVE-21830.09.patch)
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Attachment: HIVE-21830.09.patch
[jira] [Updated] (HIVE-21830) Break up DDLTask - extract rest of the Alter Table operations
[ https://issues.apache.org/jira/browse/HIVE-21830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Miklos Gergely updated HIVE-21830: Status: Open (was: Patch Available)
[jira] [Commented] (HIVE-21864) LlapBaseInputFormat#closeAll() throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HIVE-21864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864625#comment-16864625 ]

Hive QA commented on HIVE-21864:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12971865/HIVE-21864.4.patch

SUCCESS: +1 due to 1 test(s) being added or modified.
ERROR: -1 due to 1 failed/errored test(s), 16155 tests executed

Failed tests:
{noformat}
org.apache.hadoop.hive.llap.cache.TestBuddyAllocator.testMTT[2] (batchId=350)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/17588/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/17588/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-17588/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12971865 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-21864) LlapBaseInputFormat#closeAll() throws ConcurrentModificationException
[ https://issues.apache.org/jira/browse/HIVE-21864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16864608#comment-16864608 ]

Hive QA commented on HIVE-21864:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
Prechecks
| +1 | @author    | 0m 0s  | The patch does not contain any @author tags. |
master Compile Tests
|  0 | mvndep     | 2m 4s  | Maven dependency ordering for branch |
| +1 | mvninstall | 8m 0s  | master passed |
| +1 | compile    | 1m 5s  | master passed |
| +1 | checkstyle | 0m 30s | master passed |
|  0 | findbugs   | 0m 29s | llap-ext-client in master has 1 extant Findbugs warning. |
|  0 | findbugs   | 0m 45s | itests/hive-unit in master has 2 extant Findbugs warnings. |
| +1 | javadoc    | 0m 40s | master passed |
Patch Compile Tests
|  0 | mvndep     | 0m 32s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 6s  | the patch passed |
| +1 | compile    | 1m 3s  | the patch passed |
| +1 | javac      | 1m 3s  | the patch passed |
| +1 | checkstyle | 0m 31s | the patch passed |
| +1 | whitespace | 0m 0s  | The patch has no whitespace issues. |
| +1 | findbugs   | 1m 26s | the patch passed |
| +1 | javadoc    | 0m 40s | the patch passed |
Other Tests
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
|    | total      | 19m 49s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.43-2+deb8u5 (2017-09-19) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-17588/dev-support/hive-personality.sh |
| git revision | master / c6a2d79 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: llap-ext-client itests/hive-unit U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-17588/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.