[jira] [Resolved] (HIVE-27890) Tez Progress bar is not displayed in Beeline upon setting session level execution engine to Tez
[ https://issues.apache.org/jira/browse/HIVE-27890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-27890.
---------------------------------
Fix Version/s: 4.1.0
Resolution: Fixed

> Tez Progress bar is not displayed in Beeline upon setting session level execution engine to Tez
> -----------------------------------------------------------------------------------------------
>
> Key: HIVE-27890
> URL: https://issues.apache.org/jira/browse/HIVE-27890
> Project: Hive
> Issue Type: Bug
> Components: Beeline
> Reporter: Shivangi Jha
> Assignee: Shivangi Jha
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.1.0
>
> When queries are executed through Beeline with the server-level execution engine configured to MapReduce (MR) and the session-level execution engine set to Tez, the Tez progress bar is not rendered in the output.
> # When the default engine is set to Tez in the Hive conf:
> ## With no session-level change to the execution engine, the progress bar is seen. (Default engine=Tez, session level=Tez)
> ## When the session-level execution engine is set to MR, the progress bar is not seen. (Default engine=Tez, session level=MR)
> # When the default engine is set to MR in the Hive conf:
> ## When the session-level execution engine is set to Tez, the progress bar is NOT seen. (Default engine=MR, session level=Tez)
> ## With no session-level change to the execution engine, the progress bar is not seen. (Default engine=MR, session level=MR)
>
> Steps to Reproduce:
> # Set the default execution engine to MR.
> # Start a Beeline session for query execution.
> # Run {{set hive.execution.engine=tez;}}
> # Upon running a query, the Tez progress bar is not displayed in the console.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
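The reproduction steps above can be condensed into a short Beeline session sketch. Only the {{set hive.execution.engine=tez;}} command comes from the report; the connection URL and the table name are illustrative placeholders:

```sql
-- Precondition: hive-site.xml has hive.execution.engine=mr (server default).
-- Connect with Beeline, e.g.: beeline -u jdbc:hive2://localhost:10000  (placeholder URL)

-- Switch the session-level engine to Tez:
set hive.execution.engine=tez;

-- Run any query; before the fix, the Tez progress bar was not displayed here:
select count(*) from some_table;  -- hypothetical table
```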
[jira] [Commented] (HIVE-27890) Tez Progress bar is not displayed in Beeline upon setting session level execution engine to Tez
[ https://issues.apache.org/jira/browse/HIVE-27890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792911#comment-17792911 ]

Pravin Sinha commented on HIVE-27890:
-------------------------------------
The Jira id was missing from the commit message. [~shivijha30], just a suggestion: please include the Jira id in the commit message in future commits.
[jira] [Updated] (HIVE-27890) Tez Progress bar is not displayed in Beeline upon setting session level execution engine to Tez
[ https://issues.apache.org/jira/browse/HIVE-27890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27890:
--------------------------------
Summary: Tez Progress bar is not displayed in Beeline upon setting session level execution engine to Tez  (was: Tez Progress Bar does not appear while setting session execution engine to Tez in Beeline)
[jira] [Commented] (HIVE-27890) Tez Progress Bar does not appear while setting session execution engine to Tez in Beeline
[ https://issues.apache.org/jira/browse/HIVE-27890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17792889#comment-17792889 ]

Pravin Sinha commented on HIVE-27890:
-------------------------------------
Merged to master! Thanks for the patch [~shivijha30]!
[jira] [Commented] (HIVE-27169) New Locked List to prevent configuration change at runtime without throwing error
[ https://issues.apache.org/jira/browse/HIVE-27169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17775816#comment-17775816 ]

Pravin Sinha commented on HIVE-27169:
-------------------------------------
Merged to master. Thanks for the patch [~Aggarwal_Raghav] and the review [~okumin]!

> New Locked List to prevent configuration change at runtime without throwing error
> ---------------------------------------------------------------------------------
>
> Key: HIVE-27169
> URL: https://issues.apache.org/jira/browse/HIVE-27169
> Project: Hive
> Issue Type: Improvement
> Affects Versions: 4.0.0-alpha-2
> Reporter: Raghav Aggarwal
> Assignee: Raghav Aggarwal
> Priority: Minor
> Labels: pull-request-available
> Time Spent: 1h
> Remaining Estimate: 0h
>
> _*AIM*_
> Introduce a new locked list, _*hive.conf.locked.list*_, containing comma-separated configurations that cannot be changed at runtime. If someone tries to change one of them at runtime, a WARN message is shown in Beeline and the configuration is left unchanged.
>
> _*How is it different from the Restricted List?*_
> When running an HQL file, or at runtime, updating a configuration present in the restricted list throws an error and aborts further execution of the HQL file.
> With the locked list, updating such a configuration only emits a _*WARN*_ message in Beeline, and execution of the HQL file continues.
>
> _*Why is it required?*_
> In organisations, admins often want to enforce configs that users should not be able to change at runtime, without breaking users' existing HQL scripts. The locked list is useful here: it prevents users from changing the values of particular configs while not stopping the execution of their HQL scripts.
>
> {_}*NOTE*{_}: _*hive.conf.locked.list*_ can only be set at the cluster level, and the Hive service needs to be restarted afterwards.
> This is especially helpful when organisations migrate from Hive 1.x or 2.x to a higher version and admins want to enforce configurations that should remain constant.

--
This message was sent by Atlassian Jira
(v8.20.10#820010)
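As a sketch, the locked list described above could be configured in hive-site.xml as follows. The property name {{hive.conf.locked.list}} comes from the issue; the configuration keys in the value are illustrative placeholders, not a recommended set:

```xml
<!-- hive-site.xml: settable at cluster level only; restart the Hive service after changing it -->
<property>
  <name>hive.conf.locked.list</name>
  <!-- Comma-separated configs that cannot be changed at runtime.
       Attempts to change them log a WARN in Beeline and are ignored,
       unlike the restricted list, which aborts execution with an error. -->
  <value>hive.execution.engine,hive.exec.parallel</value>
</property>
```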
[jira] [Resolved] (HIVE-27169) New Locked List to prevent configuration change at runtime without throwing error
[ https://issues.apache.org/jira/browse/HIVE-27169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-27169.
---------------------------------
Resolution: Fixed
[jira] [Created] (HIVE-27782) HS2 crashes with OOM even though there is enough heap space
Pravin Sinha created HIVE-27782:
-----------------------------------
Summary: HS2 crashes with OOM even though there is enough heap space
Key: HIVE-27782
URL: https://issues.apache.org/jira/browse/HIVE-27782
Project: Hive
Issue Type: Bug
Reporter: Pravin Sinha
Assignee: Pravin Sinha
[jira] [Created] (HIVE-27781) HMS crashes with OOM even though there is enough heap space
Pravin Sinha created HIVE-27781:
-----------------------------------
Summary: HMS crashes with OOM even though there is enough heap space
Key: HIVE-27781
URL: https://issues.apache.org/jira/browse/HIVE-27781
Project: Hive
Issue Type: Bug
Reporter: Pravin Sinha
Assignee: Pravin Sinha
[jira] [Resolved] (HIVE-27584) Backport HIVE-21407 to branch-3
[ https://issues.apache.org/jira/browse/HIVE-27584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-27584.
---------------------------------
Resolution: Fixed

Committed to branch-3. Thanks for the review [~chinnalalam]!

> Backport HIVE-21407 to branch-3
> -------------------------------
>
> Key: HIVE-27584
> URL: https://issues.apache.org/jira/browse/HIVE-27584
> Project: Hive
> Issue Type: Task
> Reporter: Pravin Sinha
> Assignee: Pravin Sinha
> Priority: Major
> Labels: pull-request-available
>
> HIVE-21407: Parquet predicate pushdown is not working correctly for char column types (Marta Kuczora reviewed by Peter Vary)
[jira] [Updated] (HIVE-27588) Char set for few columns in HMS schema for fresh install and upgrade case are not in sync
[ https://issues.apache.org/jira/browse/HIVE-27588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27588:
--------------------------------
Description:
||Table||Column||Fresh Install||Upgrade case||
|PARTITION_EVENTS|CAT_NAME|`CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|`CAT_NAME` varchar(256) DEFAULT NULL|
|SERDES|DESCRIPTION|`DESCRIPTION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|`DESCRIPTION` varchar(4000) DEFAULT NULL|
|SERDES|SERIALIZER_CLASS|`SERIALIZER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|`SERIALIZER_CLASS` varchar(4000) DEFAULT NULL|
There are a few more...

was:
||Table||Column||Fresh Install||Upgrade case||
|PARTITION_EVENTS|CAT_NAME|`CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|`CAT_NAME` varchar(256) DEFAULT NULL|
|SERDES|DESCRIPTION|`DESCRIPTION` varchar(4000) DEFAULT NULL|`DESCRIPTION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|
|SERDES|SERIALIZER_CLASS|`SERIALIZER_CLASS` varchar(4000) DEFAULT NULL|`SERIALIZER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|
There are a few more...
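As an illustration only (not a fix shipped with this issue), the mismatched columns listed above could be aligned manually on a MySQL-backed metastore with statements of this shape, mirroring the fresh-install column definitions:

```sql
-- Hypothetical manual alignment of an upgraded HMS schema on MySQL.
-- The column definitions mirror the fresh-install specs from the table above;
-- back up the metastore database before running anything like this.
ALTER TABLE PARTITION_EVENTS
  MODIFY COLUMN `CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL;
ALTER TABLE SERDES
  MODIFY COLUMN `DESCRIPTION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
  MODIFY COLUMN `SERIALIZER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL;
```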
[jira] [Updated] (HIVE-27588) Char set for few columns in HMS schema for fresh install and upgrade case are not in sync
[ https://issues.apache.org/jira/browse/HIVE-27588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27588:
--------------------------------
Summary: Char set for few columns in HMS schema for fresh install and upgrade case are not in sync  (was: Char set for few columns in HMS schema for fresh install and upgrade case is not in sync)
[jira] [Updated] (HIVE-27588) Char set for few columns in HMS schema for fresh install and upgrade case is not in sync
[ https://issues.apache.org/jira/browse/HIVE-27588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27588:
--------------------------------
Description:
||Table||Column||Fresh Install||Upgrade case||
|PARTITION_EVENTS|CAT_NAME|`CAT_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|`CAT_NAME` varchar(256) DEFAULT NULL|
|SERDES|DESCRIPTION|`DESCRIPTION` varchar(4000) DEFAULT NULL|`DESCRIPTION` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|
|SERDES|SERIALIZER_CLASS|`SERIALIZER_CLASS` varchar(4000) DEFAULT NULL|`SERIALIZER_CLASS` varchar(4000) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL|
There are a few more...
[jira] [Updated] (HIVE-27588) Char set for few columns in HMS schema for fresh install and upgrade case is not in sync
[ https://issues.apache.org/jira/browse/HIVE-27588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27588:
--------------------------------
Summary: Char set for few columns in HMS schema for fresh install and upgrade case is not in sync  (was: Char set for few columns in HMS schema for fresh install and upgrade case in not in sync)
[jira] [Created] (HIVE-27588) Char set for few columns in HMS schema for fresh install and upgrade case in not in sync
Pravin Sinha created HIVE-27588:
-----------------------------------
Summary: Char set for few columns in HMS schema for fresh install and upgrade case in not in sync
Key: HIVE-27588
URL: https://issues.apache.org/jira/browse/HIVE-27588
Project: Hive
Issue Type: Bug
Reporter: Pravin Sinha
Assignee: Pravin Sinha
[jira] [Created] (HIVE-27584) Backport HIVE-21407 to branch-3
Pravin Sinha created HIVE-27584:
-----------------------------------
Summary: Backport HIVE-21407 to branch-3
Key: HIVE-27584
URL: https://issues.apache.org/jira/browse/HIVE-27584
Project: Hive
Issue Type: Task
Reporter: Pravin Sinha
Assignee: Pravin Sinha
[jira] [Updated] (HIVE-27584) Backport HIVE-21407 to branch-3
[ https://issues.apache.org/jira/browse/HIVE-27584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27584:
--------------------------------
Description: HIVE-21407: Parquet predicate pushdown is not working correctly for char column types (Marta Kuczora reviewed by Peter Vary)
[jira] [Updated] (HIVE-27505) Added batching with retry mechanism in Hive.getAllPartitionsOf API
[ https://issues.apache.org/jira/browse/HIVE-27505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-27505:
--------------------------------
Issue Type: Improvement  (was: Bug)

> Added batching with retry mechanism in Hive.getAllPartitionsOf API
> ------------------------------------------------------------------
>
> Key: HIVE-27505
> URL: https://issues.apache.org/jira/browse/HIVE-27505
> Project: Hive
> Issue Type: Improvement
> Reporter: Vikram Ahuja
> Assignee: Vikram Ahuja
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0
>
> Hive.getAllPartitionsOf() fetches all partitions in one go, which can cause memory issues with wide tables (>2000 columns) that have a large number of partitions when data is transferred from HMS to HS2 using Thrift. Optimise the flow to use PartitionIterable so that partitions are fetched in batches rather than all at once.
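The batching-with-retry pattern described above can be sketched generically. This is not the actual Hive/PartitionIterable code; `fetch_batch` is a hypothetical stand-in for a metastore call, used only to illustrate fetching in bounded batches and retrying each batch on transient failure:

```python
import time

def fetch_partitions_batched(fetch_batch, total, batch_size=100,
                             max_retries=3, backoff_s=0.0):
    """Fetch `total` items in batches of `batch_size`, retrying each batch.

    `fetch_batch(offset, limit)` is a placeholder for a metastore call
    returning a list of partitions; it is NOT the real Hive API.
    """
    result = []
    for offset in range(0, total, batch_size):
        limit = min(batch_size, total - offset)
        for attempt in range(max_retries):
            try:
                # Only this batch is retried; earlier batches are kept.
                result.extend(fetch_batch(offset, limit))
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # give up after the last retry
                time.sleep(backoff_s * (attempt + 1))  # linear backoff
    return result
```

Compared with a single bulk fetch, this bounds the size of any one Thrift response and limits the blast radius of a failure to one batch.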
[jira] [Commented] (HIVE-27505) Added batching with retry mechanism in Hive.getAllPartitionsOf API
[ https://issues.apache.org/jira/browse/HIVE-27505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17748839#comment-17748839 ]

Pravin Sinha commented on HIVE-27505:
-------------------------------------
Merged to master. Thanks for the patch [~vikramahuja_] and the review [~zhangbutao]!
[jira] [Resolved] (HIVE-27505) Added batching with retry mechanism in Hive.getAllPartitionsOf API
[ https://issues.apache.org/jira/browse/HIVE-27505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-27505.
---------------------------------
Fix Version/s: 4.0.0
Release Note: Merged to master. Thanks for the patch Vikram Ahuja and the review zhangbutao!
Resolution: Fixed
[jira] [Resolved] (HIVE-27027) Upgrade jettison to 1.5.3 to fix CVE-2022-45693
[ https://issues.apache.org/jira/browse/HIVE-27027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-27027.
---------------------------------
Resolution: Fixed

Merged to master. Thanks for the patch, [~ssand]!

> Upgrade jettison to 1.5.3 to fix CVE-2022-45693
> -----------------------------------------------
>
> Key: HIVE-27027
> URL: https://issues.apache.org/jira/browse/HIVE-27027
> Project: Hive
> Issue Type: Improvement
> Reporter: Sand Shreeya
> Assignee: Sand Shreeya
> Priority: Major
> Labels: pull-request-available
> Time Spent: 1h 10m
> Remaining Estimate: 0h
[jira] [Commented] (HIVE-26383) OOM during join query
[ https://issues.apache.org/jira/browse/HIVE-26383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17565267#comment-17565267 ]

Pravin Sinha commented on HIVE-26383:
-------------------------------------
Could not reproduce the issue when one of the joins is removed:
{code:java}
--inner join db1.tab1 a1
--on a5.csid = a1.csid
{code}

> OOM during join query
> ---------------------
>
> Key: HIVE-26383
> URL: https://issues.apache.org/jira/browse/HIVE-26383
> Project: Hive
> Issue Type: Bug
> Reporter: Pravin Sinha
> Priority: Major
>
> {code:java}
> [ERROR] org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[innerjoin_cal_with_insert]  Time elapsed: 100.73 s <<< ERROR!
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>     at java.util.HashMap.newTreeNode(HashMap.java:1784)
>     at java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2029)
>     at java.util.HashMap.putVal(HashMap.java:639)
>     at java.util.HashMap.put(HashMap.java:613)
>     at java.util.HashSet.add(HashSet.java:220)
>     at org.apache.hadoop.hive.ql.optimizer.calcite.stats.EstimateUniqueKeys.getUniqueKeys(EstimateUniqueKeys.java:229)
>     at org.apache.hadoop.hive.ql.optimizer.calcite.stats.EstimateUniqueKeys.getUniqueKeys(EstimateUniqueKeys.java:304)
>     at org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdRowCount.isKey(HiveRelMdRowCount.java:501)
>     at org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdRowCount.analyzeJoinForPKFK(HiveRelMdRowCount.java:302)
>     at org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdRowCount.getRowCount(HiveRelMdRowCount.java:102)
>     at GeneratedMetadataHandler_RowCount.getRowCount_$(Unknown Source)
>     at GeneratedMetadataHandler_RowCount.getRowCount(Unknown Source)
>     at org.apache.calcite.rel.metadata.RelMetadataQuery.getRowCount(RelMetadataQuery.java:212)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.swapInputs(LoptOptimizeJoinRule.java:1882)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.createJoinSubtree(LoptOptimizeJoinRule.java:1756)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.addToTop(LoptOptimizeJoinRule.java:1233)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.addFactorToTree(LoptOptimizeJoinRule.java:927)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.createOrdering(LoptOptimizeJoinRule.java:728)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.findBestOrderings(LoptOptimizeJoinRule.java:459)
>     at org.apache.calcite.rel.rules.LoptOptimizeJoinRule.onMatch(LoptOptimizeJoinRule.java:128)
>     at org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>     at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>     at org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>     at org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>     at org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>     at org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>     at org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.executeProgram(CalcitePlanner.java:2468)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.executeProgram(CalcitePlanner.java:2427)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyJoinOrderingTransform(CalcitePlanner.java:2193)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1750)
>     at org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1605)
> {code}
[jira] [Comment Edited] (HIVE-26383) OOM during join query
[ https://issues.apache.org/jira/browse/HIVE-26383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564958#comment-17564958 ]

Pravin Sinha edited comment on HIVE-26383 at 7/11/22 11:51 AM:
---------------------------------------------------------------
[~asolimando] Haven't tried that, but will check. Earlier, on an older branch (branch-3.1, IIRC) that had a similar issue with all empty tables, trimming did help: the issue was no longer reproducible.

was (Author: pkumarsinha):
[~asolimando] Haven't tried that, but will check. Earlier, on an older branch (branch-3.1, IIRC) that had a similar issue with all empty tables, trimming did help.
[jira] [Commented] (HIVE-26383) OOM during join query
[ https://issues.apache.org/jira/browse/HIVE-26383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564958#comment-17564958 ] Pravin Sinha commented on HIVE-26383: - [~asolimando] Haven't tried that yet, but I will check. Earlier, on an older branch (branch-3.1, IIRC) that had a similar issue with all-empty tables, trimming did help. > OOM during join query > - > > Key: HIVE-26383 > URL: https://issues.apache.org/jira/browse/HIVE-26383 > Project: Hive > Issue Type: Bug >Reporter: Pravin Sinha >Priority: Major > > {code:java} > [ERROR] > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[innerjoin_cal_with_insert] > Time elapsed: 100.73 s <<< ERROR! > java.lang.OutOfMemoryError: GC overhead limit exceeded > at java.util.HashMap.newTreeNode(HashMap.java:1784) > at java.util.HashMap$TreeNode.putTreeVal(HashMap.java:2029) > at java.util.HashMap.putVal(HashMap.java:639) > at java.util.HashMap.put(HashMap.java:613) > at java.util.HashSet.add(HashSet.java:220) > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.EstimateUniqueKeys.getUniqueKeys(EstimateUniqueKeys.java:229) > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.EstimateUniqueKeys.getUniqueKeys(EstimateUniqueKeys.java:304) > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdRowCount.isKey(HiveRelMdRowCount.java:501) > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdRowCount.analyzeJoinForPKFK(HiveRelMdRowCount.java:302) > at > org.apache.hadoop.hive.ql.optimizer.calcite.stats.HiveRelMdRowCount.getRowCount(HiveRelMdRowCount.java:102) > at GeneratedMetadataHandler_RowCount.getRowCount_$(Unknown Source) > at GeneratedMetadataHandler_RowCount.getRowCount(Unknown Source) > at > org.apache.calcite.rel.metadata.RelMetadataQuery.getRowCount(RelMetadataQuery.java:212) > at > org.apache.calcite.rel.rules.LoptOptimizeJoinRule.swapInputs(LoptOptimizeJoinRule.java:1882) > at >
org.apache.calcite.rel.rules.LoptOptimizeJoinRule.createJoinSubtree(LoptOptimizeJoinRule.java:1756) > at > org.apache.calcite.rel.rules.LoptOptimizeJoinRule.addToTop(LoptOptimizeJoinRule.java:1233) > at > org.apache.calcite.rel.rules.LoptOptimizeJoinRule.addFactorToTree(LoptOptimizeJoinRule.java:927) > at > org.apache.calcite.rel.rules.LoptOptimizeJoinRule.createOrdering(LoptOptimizeJoinRule.java:728) > at > org.apache.calcite.rel.rules.LoptOptimizeJoinRule.findBestOrderings(LoptOptimizeJoinRule.java:459) > at > org.apache.calcite.rel.rules.LoptOptimizeJoinRule.onMatch(LoptOptimizeJoinRule.java:128) > at > org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333) > at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542) > at > org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407) > at > org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243) > at > org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127) > at > org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202) > at > org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.executeProgram(CalcitePlanner.java:2468) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.executeProgram(CalcitePlanner.java:2427) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.applyJoinOrderingTransform(CalcitePlanner.java:2193) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1750) > at > org.apache.hadoop.hive.ql.parse.CalcitePlanner$CalcitePlannerAction.apply(CalcitePlanner.java:1605) > {code} -- This message was sent by Atlassian Jira (v8.20.10#820010)
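The trace above bottoms out in EstimateUniqueKeys.getUniqueKeys growing a HashSet while the join-reordering rule repeatedly asks for row counts. As a rough, standalone illustration of why this can exhaust the heap (this is not Hive code, and the column counts are only loosely modeled on the repro tables), note that the number of candidate column subsets a unique-key search could enumerate grows exponentially with table width, before join-order permutations multiply the work further:

```java
// Illustrative only, not Hive code: n columns admit 2^n column subsets,
// so wide tables joined many ways can generate a huge number of
// candidate unique-key sets during planning.
public class KeyEnumerationGrowth {

    // Number of subsets of an n-column table (requires n < 63 to fit in a long).
    static long subsetCount(int nColumns) {
        return 1L << nColumns;
    }

    public static void main(String[] args) {
        // Column counts in the range of the tables in the repro script.
        for (int n : new int[] {11, 22, 27, 30}) {
            System.out.println(n + " columns -> " + subsetCount(n) + " candidate subsets");
        }
    }
}
```

This is only a worst-case bound on the search space, but it suggests why trimming unused columns (as mentioned in the comment above) reduced pressure.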
[jira] [Comment Edited] (HIVE-26383) OOM during join query
[ https://issues.apache.org/jira/browse/HIVE-26383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564952#comment-17564952 ] Pravin Sinha edited comment on HIVE-26383 at 7/11/22 11:33 AM: --- The _*innerjoin_cal_with_insert.q*_ test file in the test doesn't exist in trunk code. The content used is as follows: {code:java} set hive.mapred.mode=nonstrict; set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;create database db1;create table db1.tab1 ( csid bigint, type_n string, c_c_a_c string, cn_i_g string, c_a_k_v string, i_p_m string, igd string, iec string, cavv string, ptt string, dev string, vtt string, apv string, apv_ENR string, mndm string, aamad string, pnwp string, ictch string, eie string, saie string, shipg_addr_ctry_nm string, vco_flow_type string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab2 ( csid bigint, usr_sid bigint, hgp_CHNL_TYP_NM string, USR_PYMT_CHNL_TYP_NM string, dflt_iso_ctry_cd bigint, hgp_rjyat_NM string, hgp_nger_NM string, hgp_nger_CD string, PYMT_INSTRMT_pntz string, ptmp_TYP_CD string, ptmp_TYP_NM string, pcoit_Enroll_TS string, pcoit_USER_giffyui string, tti_gra_ISO_rjyat_CD string, tti_gra_rjyat_NM string, ciic_rjyat_CD string, ciic_rjyat_NM string, ciic_nger_NM string, ciic_nger_CD string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab3 ( csid bigint, usr_extrnl_id string, coreln_id string, usr_Acct_typ_nm string, tmpt_bejkl_pntz bigint, tyui_bejkl_pntz bigint, tmpt_bejkl_pntz_Intnt_tmpt_NM string, tyui_bejkl_pntz_Intnt_tyui_NM string, Intnt_Mrch_Ind string, tmpt_API_KEY_VAL string, Intnt_tmpt_NM string, vasterrgln_TYP_NM string, tmpt_ISO_rjyat_CD string, tmpt_rjyat_NM string, tmpt_nger_CD string, tmpt_nger_NM string, ctghi_GUID string, ctghi_LGL_NM string, ctghi_TRD_NM string, ctghi_rjyat_NM string, ctghi_RGN_NM string, ctghi_RGN_CD string, ctghi_rjyat_CD string, Intnt_Mrch_Version string, thtjl_gtslmnbprg_gst string, 
tmpt_RM string, tmpt_rjyat_CD string, tmpt_LGL_NM string, tmpt_oeiw string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");create table db1.tab4 ( csid bigint, pymt_instrmt_extrnl_id string, paresStatus string, xid string, Erica string, hgty_BIN string, ISS_gst string, ISSUER_BID string, hgty_rjyat_NM string, hgty_rjyat_CD int, hgty_nger_NM string, hgty_nger_CD string, ISS_HOLDG_BID_gst string, ISS_HOLDING_BID int, ptmp_BRND_CD string, ACCT_NUM string, RWRD_PGM_ID string, RWRD_PGM_NM string, RPIN_RLLUP_CD string, tti_gra_PSTL_CD string, tti_gra_PRVNC_CD string, tkarh string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");create table db1.tab5 ( csid bigint, latest_ts string, Intnt_Month string, CORELN_ID string, gtlop_Flag string, gtlop_Date string, gtlop_ts string, krontron_Flag string, ERROR_giffyui string, UPDATED_giffyui string, Abandoned_Flag string, TOKENIZED_giffyui string, thtjl_3DS_giffyui string, thtjl_CANCEL_giffyui string, thtjl_gftdrs_giffyui string, krontron_Date string, krontron_ts string, stgft_Flag string, stgft_Date string, stgft_ts string, I_ltt_mhy string, I_ltt_mhy_das string, C_ltt_mhy string, C_ltt_mhy_das string, S_ltt_mhy string, S_ltt_mhy_das string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab6 ( csid bigint, browser_name string, browser_version string, browser_vendor string, OS string, grtprt_TYP_NM string, dceavt string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab7 ( csid bigint, cmpgn_callr_nm string, cmpgn_callr_typ_nm string, cmpgn_chnl_nm string, cmpgn_extrnl_clnt_id string, cmpgn_flw_nm string, cmpgn_iso_ctry_cd bigint, cmpgn_ni_callr_nm string, cmpgn_ni_campgn_id string, cmpgn_ni_cmpgn_nm string, cmpgn_ni_flw_nm string, cmpgn_ni_chnl_nm string, cmpgn_ni_plcmnt_id string, cmpgn_ni_prtflo_nm string, cmpgn_ni_seg_nm string, cmpgn_ni_site_id string, cmpgn_rdrct_url_addr string, cmpgn_usr_agnt_nm string, cmpgn_clnt_api_key_id 
string, CRDNL_vptpspgp_ID string ) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab8 ( csid bigint, ino_src_client_name string, ino_srci_clnt_id string, ino_usr_extrnl_id_typ_nm string, ino_src_ntwrk_nm string, ino_crptgm_typ string, ino_client_typ_nm string, ino_legal_name string, ino_trade_name string, pyt_upd_tran_typ_cd string, pyt_upd_success_flg string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");insert into db1.tab1 values (1, '109515', null, 'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,null); insert into db1.tab2 values (1, '109515', '11', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515,
[jira] (HIVE-26383) OOM during join query
[ https://issues.apache.org/jira/browse/HIVE-26383 ] Pravin Sinha deleted comment on HIVE-26383: - was (Author: pkumarsinha): The query file 'innerjoin_cal_with_insert' doesn't exist in trunk code. The content is as follows: {code:java} set hive.mapred.mode=nonstrict; set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;create database db1;create table db1.tab1 ( csid bigint, type_n string, c_c_a_c string, cn_i_g string, c_a_k_v string, i_p_m string, igd string, iec string, cavv string, ptt string, dev string, vtt string, apv string, apv_ENR string, mndm string, aamad string, pnwp string, ictch string, eie string, saie string, shipg_addr_ctry_nm string, vco_flow_type string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab2 ( csid bigint, usr_sid bigint, ENRLL_CHNL_TYP_NM string, USR_PYMT_CHNL_TYP_NM string, dflt_iso_ctry_cd bigint, ENRLL_CTRY_NM string, ENRLL_REGN_NM string, ENRLL_REGN_CD string, PYMT_INSTRMT_pntz string, ptmp_TYP_CD string, ptmp_TYP_NM string, VCOP_Enroll_TS string, VCOP_USER_FLAG string, tti_gra_ISO_CTRY_CD string, tti_gra_CTRY_NM string, ciic_CTRY_CD string, ciic_CTRY_NM string, ciic_REGN_NM string, ciic_REGN_CD string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab3 ( csid bigint, usr_extrnl_id string, coreln_id string, usr_Acct_typ_nm string, tmpt_bejkl_pntz bigint, tyui_bejkl_pntz bigint, tmpt_bejkl_pntz_Intnt_tmpt_NM string, tyui_bejkl_pntz_Intnt_tyui_NM string, Intnt_Mrch_Ind string, tmpt_API_KEY_VAL string, Intnt_tmpt_NM string, CLNT_TYP_NM string, tmpt_ISO_CTRY_CD string, tmpt_CTRY_NM string, tmpt_REGN_CD string, tmpt_REGN_NM string, ctghi_GUID string, ctghi_LGL_NM string, ctghi_TRD_NM string, ctghi_CTRY_NM string, ctghi_RGN_NM string, ctghi_RGN_CD string, ctghi_CTRY_CD string, Intnt_Mrch_Version string, INTENT_gtslmnbprg_gst string, tmpt_RM string, tmpt_CTRY_CD string, tmpt_LGL_NM string, tmpt_oeiw string) stored as PARQUET TBLPROPERTIES 
("parquet.compress"="SNAPPY");create table db1.tab4 ( csid bigint, pymt_instrmt_extrnl_id string, paresStatus string, xid string, Erica string, hgty_BIN string, ISS_gst string, ISSUER_BID string, hgty_CTRY_NM string, hgty_CTRY_CD int, hgty_REGN_NM string, hgty_REGN_CD string, ISS_HOLDG_BID_gst string, ISS_HOLDING_BID int, ptmp_BRND_CD string, ACCT_NUM string, RWRD_PGM_ID string, RWRD_PGM_NM string, RPIN_RLLUP_CD string, tti_gra_PSTL_CD string, tti_gra_PRVNC_CD string, tkarh string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");create table db1.tab5 ( csid bigint, latest_ts string, Intnt_Month string, CORELN_ID string, Initiated_Flag string, Initiated_Date string, Initiated_ts string, Completed_Flag string, ERROR_FLAG string, UPDATED_FLAG string, Abandoned_Flag string, TOKENIZED_FLAG string, INTENT_3DS_FLAG string, INTENT_CANCEL_FLAG string, INTENT_FAILURE_FLAG string, Completed_Date string, Completed_ts string, Successful_Flag string, Successful_Date string, Successful_ts string, I_TTL_AMT string, I_TTL_AMT_USD string, C_TTL_AMT string, C_TTL_AMT_USD string, S_TTL_AMT string, S_TTL_AMT_USD string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab6 ( csid bigint, browser_name string, browser_version string, browser_vendor string, OS string, DVC_TYP_NM string, dceavt string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab7 ( csid bigint, cmpgn_callr_nm string, cmpgn_callr_typ_nm string, cmpgn_chnl_nm string, cmpgn_extrnl_clnt_id string, cmpgn_flw_nm string, cmpgn_iso_ctry_cd bigint, cmpgn_ni_callr_nm string, cmpgn_ni_campgn_id string, cmpgn_ni_cmpgn_nm string, cmpgn_ni_flw_nm string, cmpgn_ni_chnl_nm string, cmpgn_ni_plcmnt_id string, cmpgn_ni_prtflo_nm string, cmpgn_ni_seg_nm string, cmpgn_ni_site_id string, cmpgn_rdrct_url_addr string, cmpgn_usr_agnt_nm string, cmpgn_clnt_api_key_id string, CRDNL_vptpspgp_ID string ) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); 
create table db1.tab8 ( csid bigint, ino_src_client_name string, ino_srci_clnt_id string, ino_usr_extrnl_id_typ_nm string, ino_src_ntwrk_nm string, ino_crptgm_typ string, ino_client_typ_nm string, ino_legal_name string, ino_trade_name string, pyt_upd_tran_typ_cd string, pyt_upd_success_flg string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");insert into db1.tab1 values (1, '109515', null, 'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,null); insert into db1.tab2 values (1, '109515', '11', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id'); insert into db1.tab3 values (1, '109515', '11', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null,
[jira] [Commented] (HIVE-26383) OOM during join query
[ https://issues.apache.org/jira/browse/HIVE-26383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17564949#comment-17564949 ] Pravin Sinha commented on HIVE-26383: - The query file 'innerjoin_cal_with_insert' doesn't exist in trunk code. The content is as follows: {code:java} set hive.mapred.mode=nonstrict; set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;create database db1;create table db1.tab1 ( csid bigint, type_n string, c_c_a_c string, cn_i_g string, c_a_k_v string, i_p_m string, igd string, iec string, cavv string, ptt string, dev string, vtt string, apv string, apv_ENR string, mndm string, aamad string, pnwp string, ictch string, eie string, saie string, shipg_addr_ctry_nm string, vco_flow_type string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab2 ( csid bigint, usr_sid bigint, ENRLL_CHNL_TYP_NM string, USR_PYMT_CHNL_TYP_NM string, dflt_iso_ctry_cd bigint, ENRLL_CTRY_NM string, ENRLL_REGN_NM string, ENRLL_REGN_CD string, PYMT_INSTRMT_pntz string, ptmp_TYP_CD string, ptmp_TYP_NM string, VCOP_Enroll_TS string, VCOP_USER_FLAG string, tti_gra_ISO_CTRY_CD string, tti_gra_CTRY_NM string, ciic_CTRY_CD string, ciic_CTRY_NM string, ciic_REGN_NM string, ciic_REGN_CD string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab3 ( csid bigint, usr_extrnl_id string, coreln_id string, usr_Acct_typ_nm string, tmpt_bejkl_pntz bigint, tyui_bejkl_pntz bigint, tmpt_bejkl_pntz_Intnt_tmpt_NM string, tyui_bejkl_pntz_Intnt_tyui_NM string, Intnt_Mrch_Ind string, tmpt_API_KEY_VAL string, Intnt_tmpt_NM string, CLNT_TYP_NM string, tmpt_ISO_CTRY_CD string, tmpt_CTRY_NM string, tmpt_REGN_CD string, tmpt_REGN_NM string, ctghi_GUID string, ctghi_LGL_NM string, ctghi_TRD_NM string, ctghi_CTRY_NM string, ctghi_RGN_NM string, ctghi_RGN_CD string, ctghi_CTRY_CD string, Intnt_Mrch_Version string, INTENT_gtslmnbprg_gst string, tmpt_RM string, tmpt_CTRY_CD string, tmpt_LGL_NM string, 
tmpt_oeiw string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");create table db1.tab4 ( csid bigint, pymt_instrmt_extrnl_id string, paresStatus string, xid string, Erica string, hgty_BIN string, ISS_gst string, ISSUER_BID string, hgty_CTRY_NM string, hgty_CTRY_CD int, hgty_REGN_NM string, hgty_REGN_CD string, ISS_HOLDG_BID_gst string, ISS_HOLDING_BID int, ptmp_BRND_CD string, ACCT_NUM string, RWRD_PGM_ID string, RWRD_PGM_NM string, RPIN_RLLUP_CD string, tti_gra_PSTL_CD string, tti_gra_PRVNC_CD string, tkarh string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");create table db1.tab5 ( csid bigint, latest_ts string, Intnt_Month string, CORELN_ID string, Initiated_Flag string, Initiated_Date string, Initiated_ts string, Completed_Flag string, ERROR_FLAG string, UPDATED_FLAG string, Abandoned_Flag string, TOKENIZED_FLAG string, INTENT_3DS_FLAG string, INTENT_CANCEL_FLAG string, INTENT_FAILURE_FLAG string, Completed_Date string, Completed_ts string, Successful_Flag string, Successful_Date string, Successful_ts string, I_TTL_AMT string, I_TTL_AMT_USD string, C_TTL_AMT string, C_TTL_AMT_USD string, S_TTL_AMT string, S_TTL_AMT_USD string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab6 ( csid bigint, browser_name string, browser_version string, browser_vendor string, OS string, DVC_TYP_NM string, dceavt string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab7 ( csid bigint, cmpgn_callr_nm string, cmpgn_callr_typ_nm string, cmpgn_chnl_nm string, cmpgn_extrnl_clnt_id string, cmpgn_flw_nm string, cmpgn_iso_ctry_cd bigint, cmpgn_ni_callr_nm string, cmpgn_ni_campgn_id string, cmpgn_ni_cmpgn_nm string, cmpgn_ni_flw_nm string, cmpgn_ni_chnl_nm string, cmpgn_ni_plcmnt_id string, cmpgn_ni_prtflo_nm string, cmpgn_ni_seg_nm string, cmpgn_ni_site_id string, cmpgn_rdrct_url_addr string, cmpgn_usr_agnt_nm string, cmpgn_clnt_api_key_id string, CRDNL_vptpspgp_ID string ) stored as 
PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY"); create table db1.tab8 ( csid bigint, ino_src_client_name string, ino_srci_clnt_id string, ino_usr_extrnl_id_typ_nm string, ino_src_ntwrk_nm string, ino_crptgm_typ string, ino_client_typ_nm string, ino_legal_name string, ino_trade_name string, pyt_upd_tran_typ_cd string, pyt_upd_success_flg string) stored as PARQUET TBLPROPERTIES ("parquet.compress"="SNAPPY");insert into db1.tab1 values (1, '109515', null, 'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,null); insert into db1.tab2 values (1, '109515', '11', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id', null,'test', 'test', '2018-01-10 15:03:55.0', '2018-01-10', 109515, null, '45045501', 'id'); insert into db1.tab3 values (1, '109515', '11',
[jira] [Commented] (HIVE-25756) Fix replication metrics backward compatibility issue.
[ https://issues.apache.org/jira/browse/HIVE-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17461222#comment-17461222 ] Pravin Sinha commented on HIVE-25756: - Committed to master. Thanks for the patch, [~haymant] !! > Fix replication metrics backward compatibility issue. > - > > Key: HIVE-25756 > URL: https://issues.apache.org/jira/browse/HIVE-25756 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HIVE-25756) Fix replication metrics backward compatibility issue.
[ https://issues.apache.org/jira/browse/HIVE-25756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-25756. - Resolution: Fixed +1 > Fix replication metrics backward compatibility issue. > - > > Key: HIVE-25756 > URL: https://issues.apache.org/jira/browse/HIVE-25756 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HIVE-25708) Implement creation of table_diff
[ https://issues.apache.org/jira/browse/HIVE-25708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-25708. - Resolution: Fixed Committed to master !! Thanks for the patch [~ayushtkn] . > Implement creation of table_diff > > > Key: HIVE-25708 > URL: https://issues.apache.org/jira/browse/HIVE-25708 > Project: Hive > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Generate table_diff file with the list of tables modified on cluster A after > the last successful loaded event id on B, which needs to be bootstrapped -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HIVE-25708) Implement creation of table_diff
[ https://issues.apache.org/jira/browse/HIVE-25708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17461149#comment-17461149 ] Pravin Sinha commented on HIVE-25708: - +1 > Implement creation of table_diff > > > Key: HIVE-25708 > URL: https://issues.apache.org/jira/browse/HIVE-25708 > Project: Hive > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Generate table_diff file with the list of tables modified on cluster A after > the last successful loaded event id on B, which needs to be bootstrapped -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Updated] (HIVE-25367) Fix TestReplicationScenariosAcidTables#testMultiDBTxn
[ https://issues.apache.org/jira/browse/HIVE-25367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha updated HIVE-25367: Description: [http://ci.hive.apache.org/job/hive-flaky-check/331] [http://ci.hive.apache.org/job/hive-flaky-check/332] CC: [~aasha] {color:#172b4d}Fix concurrency issue in TaskRunner {color} was: [http://ci.hive.apache.org/job/hive-flaky-check/331] [http://ci.hive.apache.org/job/hive-flaky-check/332] CC: [~aasha] > Fix TestReplicationScenariosAcidTables#testMultiDBTxn > - > > Key: HIVE-25367 > URL: https://issues.apache.org/jira/browse/HIVE-25367 > Project: Hive > Issue Type: Test > Components: repl >Reporter: Peter Vary >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 1.5h > Remaining Estimate: 0h > > [http://ci.hive.apache.org/job/hive-flaky-check/331] > [http://ci.hive.apache.org/job/hive-flaky-check/332] > CC: [~aasha] > > {color:#172b4d}Fix concurrency issue in TaskRunner > {color} -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HIVE-25742) Fix WriteOutput in Utils for Replication
[ https://issues.apache.org/jira/browse/HIVE-25742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-25742. - Resolution: Fixed Committed to master !!! Thanks for the patch, [~ayushtkn] !!! > Fix WriteOutput in Utils for Replication > > > Key: HIVE-25742 > URL: https://issues.apache.org/jira/browse/HIVE-25742 > Project: Hive > Issue Type: Bug >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > The present implementation uses {{IOUtils.closeStream(outStream);}}, which > swallows any exception raised while closing the file, falsely conveying that > the file was written successfully even when flushing data during close fails > or some other exception occurs while concluding the file. > -- This message was sent by Atlassian Jira (v8.20.1#820001)
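The failure mode described in HIVE-25742 can be sketched in isolation (FailingStream below is a hypothetical stand-in, not the actual Hive Utils code or Hadoop's IOUtils): a quiet close that catches and drops IOException returns normally even when close() fails, whereas try-with-resources lets the close failure surface to the caller.

```java
import java.io.Closeable;
import java.io.IOException;

public class QuietCloseDemo {

    // Hypothetical stream whose close() fails, as a real output stream's
    // close() can when buffered data fails to flush.
    static class FailingStream implements Closeable {
        @Override
        public void close() throws IOException {
            throw new IOException("flush failed on close");
        }
    }

    // Mimics the quiet-close pattern: any exception from close() is
    // swallowed, so the caller believes the write completed.
    static boolean quietCloseReturnsNormally() {
        Closeable c = new FailingStream();
        try {
            c.close();
        } catch (IOException swallowed) {
            // the bug pattern: the failure is eaten here
        }
        return true; // reached even though the close failed
    }

    // try-with-resources propagates the close() failure instead.
    static boolean tryWithResourcesSurfacesFailure() {
        try (Closeable c = new FailingStream()) {
            // no body needed; close() runs (and fails) on exit
        } catch (IOException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println("quiet close returned normally: " + quietCloseReturnsNormally());
        System.out.println("try-with-resources surfaced failure: " + tryWithResourcesSurfacesFailure());
    }
}
```

A close failure on an output stream often means the tail of the data never reached storage, which is why swallowing it can falsely signal a complete write.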
[jira] [Resolved] (HIVE-25609) Preserve XAttrs in normal file copy case.
[ https://issues.apache.org/jira/browse/HIVE-25609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-25609. - Resolution: Fixed Committed to master. Thanks for the patch [~haymant] !! > Preserve XAttrs in normal file copy case. > - > > Key: HIVE-25609 > URL: https://issues.apache.org/jira/browse/HIVE-25609 > Project: Hive > Issue Type: Improvement >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HIVE-25700) Prevent deletion of Notification Events post restarts
[ https://issues.apache.org/jira/browse/HIVE-25700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-25700. - Resolution: Fixed Committed to master. Thanks for the patch, [~ayushtkn] !! > Prevent deletion of Notification Events post restarts > - > > Key: HIVE-25700 > URL: https://issues.apache.org/jira/browse/HIVE-25700 > Project: Hive > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > In DR scenarios, when the Hive service goes down, do not delete entries in > the Notification Log immediately; give admins time to reconfigure properties > to handle the further replication process. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HIVE-25700) Prevent deletion of Notification Events post restarts
[ https://issues.apache.org/jira/browse/HIVE-25700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17446617#comment-17446617 ] Pravin Sinha commented on HIVE-25700: - +1 > Prevent deletion of Notification Events post restarts > - > > Key: HIVE-25700 > URL: https://issues.apache.org/jira/browse/HIVE-25700 > Project: Hive > Issue Type: Sub-task >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > > In DR scenarios, when the Hive service goes down, do not delete entries in > the Notification Log immediately; give admins time to reconfigure properties > to handle the further replication process. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Resolved] (HIVE-25596) Compress Hive Replication Metrics while storing
[ https://issues.apache.org/jira/browse/HIVE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-25596. - Resolution: Fixed Committed to master !! Thanks for the patch, [~haymant] > Compress Hive Replication Metrics while storing > --- > > Key: HIVE-25596 > URL: https://issues.apache.org/jira/browse/HIVE-25596 > Project: Hive > Issue Type: Improvement >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Attachments: CompressedRM_Progress(k=10), CompressedRM_Progress(k=5), > PlainTextRM_Progress(k=10), PlainTextRM_Progress(k=5) > > Time Spent: 6h > Remaining Estimate: 0h > > Compress the json fields of sys.replication_metrics table to optimise RDBMS > space usage. -- This message was sent by Atlassian Jira (v8.20.1#820001)
[jira] [Commented] (HIVE-25596) Compress Hive Replication Metrics while storing
[ https://issues.apache.org/jira/browse/HIVE-25596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17440903#comment-17440903 ]

Pravin Sinha commented on HIVE-25596:
-------------------------------------

+1

> Compress Hive Replication Metrics while storing
> -----------------------------------------------
>
>                 Key: HIVE-25596
>                 URL: https://issues.apache.org/jira/browse/HIVE-25596
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: CompressedRM_Progress(k=10), CompressedRM_Progress(k=5), PlainTextRM_Progress(k=10), PlainTextRM_Progress(k=5)
>          Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Compress the json fields of the sys.replication_metrics table to optimise
> RDBMS space usage.
[jira] [Resolved] (HIVE-25387) Fix TestMiniLlapLocalCliDriver#replication_metrics_ingest.q
[ https://issues.apache.org/jira/browse/HIVE-25387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-25387.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the patch, [~haymant], and for the review, [~^sharma]!!!

> Fix TestMiniLlapLocalCliDriver#replication_metrics_ingest.q
> -----------------------------------------------------------
>
>                 Key: HIVE-25387
>                 URL: https://issues.apache.org/jira/browse/HIVE-25387
>             Project: Hive
>          Issue Type: Test
>          Components: repl, Test
>            Reporter: Peter Vary
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test is flaky; we need to fix it:
> http://ci.hive.apache.org/job/hive-flaky-check/344/
> CC: [~aasha]
[jira] [Resolved] (HIVE-25433) DistCp job fails with yarn map compatibility mode issue.
[ https://issues.apache.org/jira/browse/HIVE-25433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-25433.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the patch, [~haymant], and for the review, [~ayushtkn]!!!

> DistCp job fails with yarn map compatibility mode issue.
> --------------------------------------------------------
>
>                 Key: HIVE-25433
>                 URL: https://issues.apache.org/jira/browse/HIVE-25433
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Critical
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
[jira] [Assigned] (HIVE-24766) Fix TestScheduledReplication
[ https://issues.apache.org/jira/browse/HIVE-24766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-24766:
-----------------------------------

    Assignee: Haymant Mangla  (was: Pratyush Madhukar)

> Fix TestScheduledReplication
> ----------------------------
>
>                 Key: HIVE-24766
>                 URL: https://issues.apache.org/jira/browse/HIVE-24766
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: Haymant Mangla
>            Priority: Major
>
> The test seems to be unstable:
> http://ci.hive.apache.org/job/hive-flaky-check/184/
[jira] [Assigned] (HIVE-25439) Make the DistCp stat csv content parse-able
[ https://issues.apache.org/jira/browse/HIVE-25439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25439:
-----------------------------------

> Make the DistCp stat csv content parse-able
> -------------------------------------------
>
>                 Key: HIVE-25439
>                 URL: https://issues.apache.org/jira/browse/HIVE-25439
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>
> The csv file generated by the script replstats.sh isn't parse-able when the
> number of bytes copied is huge: the 'Bytes Copied' field can itself contain
> a comma. E.g.
> {code:java}
> #cat Repl#repl_testing20210802T153039308427#14711values.csv
> job_1624306668424_194169,2-Aug-2021 20:20:41,2-Aug-2021 20:22:08,1mins, 27sec,2-Aug-2021 20:22:29,21sec,1,0,112,527,514,1,SUCCEEDED
> {code}
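One conventional way to make such a file parse-able is to quote any field that contains a comma, RFC 4180 style. The helper below is a minimal, hypothetical sketch (class and method names are illustrative, not the actual replstats.sh fix, which is a shell script): a value like "1mins, 27sec" round-trips through quoting and parsing without splitting into two fields.

```java
import java.util.ArrayList;
import java.util.List;

public class CsvQuoting {

    // Serialize fields to one CSV row, quoting any field that contains
    // a comma or a quote (embedded quotes are doubled, per RFC 4180).
    static String toCsvRow(List<String> fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.size(); i++) {
            String f = fields.get(i);
            if (f.contains(",") || f.contains("\"")) {
                f = "\"" + f.replace("\"", "\"\"") + "\"";
            }
            if (i > 0) sb.append(',');
            sb.append(f);
        }
        return sb.toString();
    }

    // Split a row back into fields, honoring quoted sections so commas
    // inside quotes do not act as separators.
    static List<String> parseCsvRow(String row) {
        List<String> out = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < row.length(); i++) {
            char c = row.charAt(i);
            if (inQuotes) {
                if (c == '"' && i + 1 < row.length() && row.charAt(i + 1) == '"') {
                    cur.append('"');   // escaped quote
                    i++;
                } else if (c == '"') {
                    inQuotes = false;  // closing quote
                } else {
                    cur.append(c);
                }
            } else if (c == '"') {
                inQuotes = true;
            } else if (c == ',') {
                out.add(cur.toString());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        out.add(cur.toString());
        return out;
    }
}
```

With quoting in place, a generic CSV reader recovers exactly the fields that were written, regardless of commas embedded in durations or byte counts.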
[jira] [Commented] (HIVE-25205) Reduce overhead of adding write notification log during batch loading of partitions
[ https://issues.apache.org/jira/browse/HIVE-25205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17393615#comment-17393615 ]

Pravin Sinha commented on HIVE-25205:
-------------------------------------

+1

> Reduce overhead of adding write notification log during batch loading of partitions
> -----------------------------------------------------------------------------------
>
>                 Key: HIVE-25205
>                 URL: https://issues.apache.org/jira/browse/HIVE-25205
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Hive, HiveServer2
>            Reporter: mahesh kumar behera
>            Assignee: mahesh kumar behera
>            Priority: Major
>              Labels: performance
>
> During batch loading of partitions, a write notification log entry is added
> for each partition. This causes a delay in execution, as a call to HMS is
> made per partition. It can be optimised by adding a new API to HMS that
> accepts a batch of partitions, so the whole batch is added together to the
> backend database. Once HMS has a batch of notification logs, the code can be
> optimised to add them in a single call to the backend RDBMS.
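The gain described above — one metastore round trip per chunk instead of one per partition — can be sketched generically. The helper below is illustrative only (the class and method names are hypothetical, not Hive's actual batch API); the batch callback stands in for the single HMS call that would carry many partitions at once.

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchCalls {

    // Invoke batchCall once per chunk of up to batchSize items and return
    // the number of invocations. Each invocation models one remote round
    // trip, so 1000 items with batchSize 100 cost 10 calls, not 1000.
    static <T> int processInBatches(List<T> items, int batchSize, Consumer<List<T>> batchCall) {
        int calls = 0;
        for (int i = 0; i < items.size(); i += batchSize) {
            batchCall.accept(items.subList(i, Math.min(i + batchSize, items.size())));
            calls++;
        }
        return calls;
    }
}
```

The design point is that the per-call overhead (RPC setup, transaction begin/commit in the backing RDBMS) is paid once per batch rather than once per partition.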
[jira] [Resolved] (HIVE-25367) Fix TestReplicationScenariosAcidTables#testMultiDBTxn
[ https://issues.apache.org/jira/browse/HIVE-25367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-25367.
---------------------------------
    Resolution: Fixed

Committed to master!!! Thanks for the patch, [~haymant], and for the review, [~pvary] / [~ayushtkn]!!!

> Fix TestReplicationScenariosAcidTables#testMultiDBTxn
> -----------------------------------------------------
>
>                 Key: HIVE-25367
>                 URL: https://issues.apache.org/jira/browse/HIVE-25367
>             Project: Hive
>          Issue Type: Test
>          Components: repl
>            Reporter: Peter Vary
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> [http://ci.hive.apache.org/job/hive-flaky-check/331]
> [http://ci.hive.apache.org/job/hive-flaky-check/332]
> CC: [~aasha]
[jira] [Resolved] (HIVE-24946) Handle failover case during Repl Load
[ https://issues.apache.org/jira/browse/HIVE-24946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-24946.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the contribution, [~haymant]!!!

> Handle failover case during Repl Load
> -------------------------------------
>
>                 Key: HIVE-24946
>                 URL: https://issues.apache.org/jira/browse/HIVE-24946
>             Project: Hive
>          Issue Type: New Feature
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3h
>  Remaining Estimate: 0h
>
> To handle:
> # Introduce two states of the failover db property, to denote the nature of
>   the database at the time failover was initiated.
> # If the failover start config is enabled and the dump directory contains
>   the failover marker file, then in incremental load, as a preAckTask, we
>   should:
> ## Remove repl.target.for from the target db.
> ## Set repl.failover.endpoint = "TARGET".
> ## Update the replication metrics to say that the target cluster is
>    failover ready.
> # In the first dump operation in the reverse direction, the presence of the
>   failover-ready marker and repl.failover.endpoint = "TARGET" is used as the
>   indicator for a bootstrap iteration.
> # In any dump operation except the first one in the reverse direction, if
>   repl.failover.endpoint is set for the db and the failover start config is
>   false, remove this property.
> # In incremental load, if the failover start config is disabled, add
>   repl.target.for and remove repl.failover.endpoint if present.
[jira] [Commented] (HIVE-24946) Handle failover case during Repl Load
[ https://issues.apache.org/jira/browse/HIVE-24946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391950#comment-17391950 ]

Pravin Sinha commented on HIVE-24946:
-------------------------------------

+1, LGTM

> Handle failover case during Repl Load
> -------------------------------------
>
>                 Key: HIVE-24946
>                 URL: https://issues.apache.org/jira/browse/HIVE-24946
>             Project: Hive
>          Issue Type: New Feature
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> To handle:
> # Introduce two states of the failover db property, to denote the nature of
>   the database at the time failover was initiated.
> # If the failover start config is enabled and the dump directory contains
>   the failover marker file, then in incremental load, as a preAckTask, we
>   should:
> ## Remove repl.target.for from the target db.
> ## Set repl.failover.endpoint = "TARGET".
> ## Update the replication metrics to say that the target cluster is
>    failover ready.
> # In the first dump operation in the reverse direction, the presence of the
>   failover-ready marker and repl.failover.endpoint = "TARGET" is used as the
>   indicator for a bootstrap iteration.
> # In any dump operation except the first one in the reverse direction, if
>   repl.failover.endpoint is set for the db and the failover start config is
>   false, remove this property.
> # In incremental load, if the failover start config is disabled, add
>   repl.target.for and remove repl.failover.endpoint if present.
[jira] [Assigned] (HIVE-25387) Fix TestMiniLlapLocalCliDriver#replication_metrics_ingest.q
[ https://issues.apache.org/jira/browse/HIVE-25387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25387:
-----------------------------------

    Assignee: Haymant Mangla

> Fix TestMiniLlapLocalCliDriver#replication_metrics_ingest.q
> -----------------------------------------------------------
>
>                 Key: HIVE-25387
>                 URL: https://issues.apache.org/jira/browse/HIVE-25387
>             Project: Hive
>          Issue Type: Test
>          Components: repl, Test
>            Reporter: Peter Vary
>            Assignee: Haymant Mangla
>            Priority: Major
>
> The test is flaky; we need to fix it:
> http://ci.hive.apache.org/job/hive-flaky-check/344/
> CC: [~aasha]
[jira] [Resolved] (HIVE-25137) getAllWriteEventInfo should go through the HMS client instead of using RawStore directly
[ https://issues.apache.org/jira/browse/HIVE-25137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-25137.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the patch, [~hsnusonic]!!!

> getAllWriteEventInfo should go through the HMS client instead of using
> RawStore directly
> ----------------------------------------------------------------------
>
>                 Key: HIVE-25137
>                 URL: https://issues.apache.org/jira/browse/HIVE-25137
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pratyush Madhukar
>            Assignee: Yu-Wen Lai
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code:java}
> private List<WriteEventInfo> getAllWriteEventInfo(Context withinContext) throws Exception {
>   String contextDbName = StringUtils.normalizeIdentifier(withinContext.replScope.getDbName());
>   RawStore rawStore = HiveMetaStore.HMSHandler.getMSForConf(withinContext.hiveConf);
>   List<WriteEventInfo> writeEventInfoList
>       = rawStore.getAllWriteEventInfo(eventMessage.getTxnId(), contextDbName, null);
> {code}
[jira] [Resolved] (HIVE-24918) Handle failover case during Repl Dump
[ https://issues.apache.org/jira/browse/HIVE-24918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-24918.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the contribution, [~haymant]

> Handle failover case during Repl Dump
> -------------------------------------
>
>                 Key: HIVE-24918
>                 URL: https://issues.apache.org/jira/browse/HIVE-24918
>             Project: Hive
>          Issue Type: New Feature
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> To handle:
> a) Whenever the user wants to go ahead with failover, the next or subsequent
>    repl dump operation, upon confirming that there are no pending open
>    transaction events, should create a _failover_ready marker file in the
>    dump dir. This marker file would contain the name of the scheduled query
>    that generated this dump.
> b) Skip subsequent repl dump instances once the marker file is in place.
[jira] [Commented] (HIVE-24918) Handle failover case during Repl Dump
[ https://issues.apache.org/jira/browse/HIVE-24918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387043#comment-17387043 ]

Pravin Sinha commented on HIVE-24918:
-------------------------------------

+1 LGTM

> Handle failover case during Repl Dump
> -------------------------------------
>
>                 Key: HIVE-24918
>                 URL: https://issues.apache.org/jira/browse/HIVE-24918
>             Project: Hive
>          Issue Type: New Feature
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 7h 40m
>  Remaining Estimate: 0h
>
> To handle:
> a) Whenever the user wants to go ahead with failover, the next or subsequent
>    repl dump operation, upon confirming that there are no pending open
>    transaction events, should create a _failover_ready marker file in the
>    dump dir. This marker file would contain the name of the scheduled query
>    that generated this dump.
> b) Skip subsequent repl dump instances once the marker file is in place.
[jira] [Assigned] (HIVE-25367) Fix TestReplicationScenariosAcidTables#testMultiDBTxn
[ https://issues.apache.org/jira/browse/HIVE-25367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25367:
-----------------------------------

    Assignee: Haymant Mangla

> Fix TestReplicationScenariosAcidTables#testMultiDBTxn
> -----------------------------------------------------
>
>                 Key: HIVE-25367
>                 URL: https://issues.apache.org/jira/browse/HIVE-25367
>             Project: Hive
>          Issue Type: Test
>          Components: repl
>            Reporter: Peter Vary
>            Assignee: Haymant Mangla
>            Priority: Major
>
> [http://ci.hive.apache.org/job/hive-flaky-check/331]
> [http://ci.hive.apache.org/job/hive-flaky-check/332]
> CC: [~aasha]
[jira] [Assigned] (HIVE-25355) EXPLAIN statement for write transactions with hive.txn.readonly.enabled fails
[ https://issues.apache.org/jira/browse/HIVE-25355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25355:
-----------------------------------

> EXPLAIN statement for write transactions with hive.txn.readonly.enabled fails
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-25355
>                 URL: https://issues.apache.org/jira/browse/HIVE-25355
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
[jira] [Commented] (HIVE-25137) getAllWriteEventInfo should go through the HMS client instead of using RawStore directly
[ https://issues.apache.org/jira/browse/HIVE-25137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17383509#comment-17383509 ]

Pravin Sinha commented on HIVE-25137:
-------------------------------------

+1

> getAllWriteEventInfo should go through the HMS client instead of using
> RawStore directly
> ----------------------------------------------------------------------
>
>                 Key: HIVE-25137
>                 URL: https://issues.apache.org/jira/browse/HIVE-25137
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pratyush Madhukar
>            Assignee: Yu-Wen Lai
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> private List<WriteEventInfo> getAllWriteEventInfo(Context withinContext) throws Exception {
>   String contextDbName = StringUtils.normalizeIdentifier(withinContext.replScope.getDbName());
>   RawStore rawStore = HiveMetaStore.HMSHandler.getMSForConf(withinContext.hiveConf);
>   List<WriteEventInfo> writeEventInfoList
>       = rawStore.getAllWriteEventInfo(eventMessage.getTxnId(), contextDbName, null);
> {code}
[jira] [Commented] (HIVE-25330) Make FS calls in CopyUtils retryable
[ https://issues.apache.org/jira/browse/HIVE-25330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381180#comment-17381180 ]

Pravin Sinha commented on HIVE-25330:
-------------------------------------

One such trace is this:
{code:java}
2021-07-09 03:34:30,643 ERROR org.apache.hadoop.hive.ql.exec.ReplCopyTask: [Thread-98208]: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:477)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1685)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1745)
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1742)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1757)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1738)
    at org.apache.hadoop.hive.ql.parse.repl.CopyUtils.doCopy(CopyUtils.java:154)
    at org.apache.hadoop.hive.ql.parse.repl.CopyUtils.copyAndVerify(CopyUtils.java:114)
    at org.apache.hadoop.hive.ql.exec.ReplCopyTask.execute(ReplCopyTask.java:155)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:83)
{code}

> Make FS calls in CopyUtils retryable
> ------------------------------------
>
>                 Key: HIVE-25330
>                 URL: https://issues.apache.org/jira/browse/HIVE-25330
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Haymant Mangla
>            Priority: Major
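Making FS calls like the failing `FileSystem.exists` above retryable generally means wrapping each operation in a retry loop with backoff. The sketch below is a generic, assumed shape for such a wrapper (it is not Hive's actual implementation for CopyUtils): retry the operation up to a maximum number of attempts, doubling the sleep between attempts, and rethrow the last failure if all attempts are exhausted.

```java
import java.util.concurrent.Callable;

public class RetryUtil {

    // Run op, retrying on any exception up to maxAttempts times with
    // exponential backoff; rethrow the last exception when exhausted.
    public static <T> T withRetries(Callable<T> op, int maxAttempts, long initialBackoffMs)
            throws Exception {
        long backoff = initialBackoffMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff); // wait before the next attempt
                    backoff *= 2;
                }
            }
        }
        throw last;
    }
}
```

A call site such as `fs.exists(path)` would then be invoked as `withRetries(() -> fs.exists(path), 3, 1000)`; a production version would typically retry only on specific exception types (e.g. `IOException`) rather than on any exception.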
[jira] [Assigned] (HIVE-25330) Make FS calls in CopyUtils retryable
[ https://issues.apache.org/jira/browse/HIVE-25330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25330:
-----------------------------------

    Assignee: Haymant Mangla  (was: Pravin Sinha)

> Make FS calls in CopyUtils retryable
> ------------------------------------
>
>                 Key: HIVE-25330
>                 URL: https://issues.apache.org/jira/browse/HIVE-25330
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Haymant Mangla
>            Priority: Major
[jira] [Resolved] (HIVE-25246) Fix the clean up of open repl created transactions
[ https://issues.apache.org/jira/browse/HIVE-25246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-25246.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the patch, [~haymant]!!

> Fix the clean up of open repl created transactions
> --------------------------------------------------
>
>                 Key: HIVE-25246
>                 URL: https://issues.apache.org/jira/browse/HIVE-25246
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 5h 10m
>  Remaining Estimate: 0h
[jira] [Commented] (HIVE-25246) Fix the clean up of open repl created transactions
[ https://issues.apache.org/jira/browse/HIVE-25246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381053#comment-17381053 ]

Pravin Sinha commented on HIVE-25246:
-------------------------------------

+1

> Fix the clean up of open repl created transactions
> --------------------------------------------------
>
>                 Key: HIVE-25246
>                 URL: https://issues.apache.org/jira/browse/HIVE-25246
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 5h
>  Remaining Estimate: 0h
[jira] [Assigned] (HIVE-25330) Make FS calls in CopyUtils retryable
[ https://issues.apache.org/jira/browse/HIVE-25330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25330:
-----------------------------------

> Make FS calls in CopyUtils retryable
> ------------------------------------
>
>                 Key: HIVE-25330
>                 URL: https://issues.apache.org/jira/browse/HIVE-25330
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
[jira] [Updated] (HIVE-25267) Fix TestReplicationScenariosAcidTables
[ https://issues.apache.org/jira/browse/HIVE-25267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-25267:
--------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review, [~anishek]

> Fix TestReplicationScenariosAcidTables
> --------------------------------------
>
>                 Key: HIVE-25267
>                 URL: https://issues.apache.org/jira/browse/HIVE-25267
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> The test is unstable:
> http://ci.hive.apache.org/job/hive-flaky-check/242/
[jira] [Commented] (HIVE-25267) Fix TestReplicationScenariosAcidTables
[ https://issues.apache.org/jira/browse/HIVE-25267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17377904#comment-17377904 ]

Pravin Sinha commented on HIVE-25267:
-------------------------------------

No more failures: http://ci.hive.apache.org/job/hive-flaky-check/302/

> Fix TestReplicationScenariosAcidTables
> --------------------------------------
>
>                 Key: HIVE-25267
>                 URL: https://issues.apache.org/jira/browse/HIVE-25267
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test is unstable:
> http://ci.hive.apache.org/job/hive-flaky-check/242/
[jira] [Assigned] (HIVE-25305) Replayed transactions are not cleaned up properly on open txn timeout
[ https://issues.apache.org/jira/browse/HIVE-25305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25305:
-----------------------------------

> Replayed transactions are not cleaned up properly on open txn timeout
> ---------------------------------------------------------------------
>
>                 Key: HIVE-25305
>                 URL: https://issues.apache.org/jira/browse/HIVE-25305
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
[jira] [Updated] (HIVE-25267) Fix TestReplicationScenariosAcidTables
[ https://issues.apache.org/jira/browse/HIVE-25267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-25267:
--------------------------------
    Status: Patch Available  (was: Open)

> Fix TestReplicationScenariosAcidTables
> --------------------------------------
>
>                 Key: HIVE-25267
>                 URL: https://issues.apache.org/jira/browse/HIVE-25267
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> The test is unstable:
> http://ci.hive.apache.org/job/hive-flaky-check/242/
[jira] [Assigned] (HIVE-25267) Fix TestReplicationScenariosAcidTables
[ https://issues.apache.org/jira/browse/HIVE-25267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25267:
-----------------------------------

    Assignee: Pravin Sinha

> Fix TestReplicationScenariosAcidTables
> --------------------------------------
>
>                 Key: HIVE-25267
>                 URL: https://issues.apache.org/jira/browse/HIVE-25267
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Zoltan Haindrich
>            Assignee: Pravin Sinha
>            Priority: Major
>
> The test is unstable:
> http://ci.hive.apache.org/job/hive-flaky-check/242/
[jira] [Commented] (HIVE-25164) Execute Bootstrap REPL load DDL tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-25164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17372162#comment-17372162 ]

Pravin Sinha commented on HIVE-25164:
-------------------------------------

Committed to master. Thanks for the review, [~aasha]!!

> Execute Bootstrap REPL load DDL tasks in parallel
> -------------------------------------------------
>
>                 Key: HIVE-25164
>                 URL: https://issues.apache.org/jira/browse/HIVE-25164
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
[jira] [Updated] (HIVE-25164) Execute Bootstrap REPL load DDL tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-25164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-25164:
--------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

> Execute Bootstrap REPL load DDL tasks in parallel
> -------------------------------------------------
>
>                 Key: HIVE-25164
>                 URL: https://issues.apache.org/jira/browse/HIVE-25164
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 0.5h
>  Remaining Estimate: 0h
[jira] [Updated] (HIVE-25272) READ transactions are getting logged in NOTIFICATION LOG
[ https://issues.apache.org/jira/browse/HIVE-25272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-25272:
--------------------------------
    Resolution: Fixed
        Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the review, [~aasha], [~pvary] and [~dkuzmenko]!!

> READ transactions are getting logged in NOTIFICATION LOG
> --------------------------------------------------------
>
>                 Key: HIVE-25272
>                 URL: https://issues.apache.org/jira/browse/HIVE-25272
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> While READ transactions are already skipped from being logged in the
> NOTIFICATION log, a few are still getting logged. Those transactions need
> to be skipped as well.
[jira] [Updated] (HIVE-25272) READ transactions are getting logged in NOTIFICATION LOG
[ https://issues.apache.org/jira/browse/HIVE-25272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-25272:
--------------------------------
    Status: Patch Available  (was: Open)

> READ transactions are getting logged in NOTIFICATION LOG
> --------------------------------------------------------
>
>                 Key: HIVE-25272
>                 URL: https://issues.apache.org/jira/browse/HIVE-25272
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>
> While READ transactions are already skipped from being logged in the
> NOTIFICATION log, a few are still getting logged. Those transactions need
> to be skipped as well.
[jira] [Assigned] (HIVE-25272) READ transactions are getting logged in NOTIFICATION LOG
[ https://issues.apache.org/jira/browse/HIVE-25272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25272:
-----------------------------------

> READ transactions are getting logged in NOTIFICATION LOG
> --------------------------------------------------------
>
>                 Key: HIVE-25272
>                 URL: https://issues.apache.org/jira/browse/HIVE-25272
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>
> While READ transactions are already skipped from being logged in the
> NOTIFICATION log, a few are still getting logged. Those transactions need
> to be skipped as well.
[jira] [Resolved] (HIVE-25154) Disable StatsUpdaterThread and PartitionManagementTask for db that is being failed over.
[ https://issues.apache.org/jira/browse/HIVE-25154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-25154.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the patch, [~haymant]!!!

> Disable StatsUpdaterThread and PartitionManagementTask for db that is being
> failed over.
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-25154
>                 URL: https://issues.apache.org/jira/browse/HIVE-25154
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-25154.patch
>          Time Spent: 5h 10m
>  Remaining Estimate: 0h
[jira] [Commented] (HIVE-25154) Disable StatsUpdaterThread and PartitionManagementTask for db that is being failed over.
[ https://issues.apache.org/jira/browse/HIVE-25154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359077#comment-17359077 ]

Pravin Sinha commented on HIVE-25154:
-------------------------------------

+1

> Disable StatsUpdaterThread and PartitionManagementTask for db that is being
> failed over.
> ---------------------------------------------------------------------------
>
>                 Key: HIVE-25154
>                 URL: https://issues.apache.org/jira/browse/HIVE-25154
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: HIVE-25154.patch
>          Time Spent: 4h 40m
>  Remaining Estimate: 0h
[jira] [Updated] (HIVE-25164) Execute Bootstrap REPL load DDL tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-25164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha updated HIVE-25164:
--------------------------------
    Labels:   (was: pull-request-available)
    Status: Patch Available  (was: Open)

> Execute Bootstrap REPL load DDL tasks in parallel
> -------------------------------------------------
>
>                 Key: HIVE-25164
>                 URL: https://issues.apache.org/jira/browse/HIVE-25164
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
[jira] [Assigned] (HIVE-25164) Execute Bootstrap REPL load DDL tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-25164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha reassigned HIVE-25164:
-----------------------------------

> Execute Bootstrap REPL load DDL tasks in parallel
> -------------------------------------------------
>
>                 Key: HIVE-25164
>                 URL: https://issues.apache.org/jira/browse/HIVE-25164
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Pravin Sinha
>            Assignee: Pravin Sinha
>            Priority: Major
[jira] [Resolved] (HIVE-24956) Add debug logs for time taken in the incremental event processing
[ https://issues.apache.org/jira/browse/HIVE-24956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pravin Sinha resolved HIVE-24956.
---------------------------------
    Resolution: Fixed

Committed to master. Thanks for the patch, [~^sharma]!!

> Add debug logs for time taken in the incremental event processing
> -----------------------------------------------------------------
>
>                 Key: HIVE-24956
>                 URL: https://issues.apache.org/jira/browse/HIVE-24956
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 20m
>  Remaining Estimate: 0h
[jira] [Commented] (HIVE-24956) Add debug logs for time taken in the incremental event processing
[ https://issues.apache.org/jira/browse/HIVE-24956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17351588#comment-17351588 ]

Pravin Sinha commented on HIVE-24956:
-------------------------------------

+1

> Add debug logs for time taken in the incremental event processing
> -----------------------------------------------------------------
>
>                 Key: HIVE-24956
>                 URL: https://issues.apache.org/jira/browse/HIVE-24956
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Arko Sharma
>            Assignee: Arko Sharma
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 1h 10m
>  Remaining Estimate: 0h
[jira] [Commented] (HIVE-24909) Skip the repl events from getting logged in notification log
[ https://issues.apache.org/jira/browse/HIVE-24909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17347747#comment-17347747 ]

Pravin Sinha commented on HIVE-24909:
-------------------------------------

Committed to master. Thanks for the patch, [~haymant]

> Skip the repl events from getting logged in notification log
> ------------------------------------------------------------
>
>                 Key: HIVE-24909
>                 URL: https://issues.apache.org/jira/browse/HIVE-24909
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Haymant Mangla
>            Assignee: Haymant Mangla
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 7h 10m
>  Remaining Estimate: 0h
>
> Currently, REPL dump events are logged and replicated as part of the
> replication policy. Whenever a replication cycle completes, one transaction
> corresponding to the repl dump operation is always left open on the target.
> This will never be caught up without manually dealing with the transaction
> on the target cluster.
[jira] [Comment Edited] (HIVE-24909) Skip the repl events from getting logged in notification log
[ https://issues.apache.org/jira/browse/HIVE-24909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347745#comment-17347745 ] Pravin Sinha edited comment on HIVE-24909 at 5/19/21, 3:48 PM: --- +1 Committed to master. Thanks for the patch, [~haymant] was (Author: pkumarsinha): Thanks for the patch, [~haymant] > Skip the repl events from getting logged in notification log > > > Key: HIVE-24909 > URL: https://issues.apache.org/jira/browse/HIVE-24909 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 7h 10m > Remaining Estimate: 0h > > Currently REPL dump events are logged and replicated as a part of replication > policy. Whenever one replication cycle completed, we always have one > transaction left open on the target corresponding to repl dump operation. > This will never be caught up without manually dealing with the transaction on > target cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-24909) Skip the repl events from getting logged in notification log
[ https://issues.apache.org/jira/browse/HIVE-24909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347745#comment-17347745 ] Pravin Sinha edited comment on HIVE-24909 at 5/19/21, 3:48 PM: --- +1 was (Author: pkumarsinha): +1 Committed to master. Thanks for the patch, [~haymant] > Skip the repl events from getting logged in notification log > > > Key: HIVE-24909 > URL: https://issues.apache.org/jira/browse/HIVE-24909 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 7h 10m > Remaining Estimate: 0h > > Currently REPL dump events are logged and replicated as a part of replication > policy. Whenever one replication cycle completed, we always have one > transaction left open on the target corresponding to repl dump operation. > This will never be caught up without manually dealing with the transaction on > target cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24909) Skip the repl events from getting logged in notification log
[ https://issues.apache.org/jira/browse/HIVE-24909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24909. - Resolution: Fixed Thanks for the patch, [~haymant] > Skip the repl events from getting logged in notification log > > > Key: HIVE-24909 > URL: https://issues.apache.org/jira/browse/HIVE-24909 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 7h 10m > Remaining Estimate: 0h > > Currently REPL dump events are logged and replicated as a part of replication > policy. Whenever one replication cycle completed, we always have one > transaction left open on the target corresponding to repl dump operation. > This will never be caught up without manually dealing with the transaction on > target cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24912) Support to add repl.target.for property during incremental run
[ https://issues.apache.org/jira/browse/HIVE-24912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24912. - Resolution: Fixed Committed to master. Thanks for the patch [~haymant] !! > Support to add repl.target.for property during incremental run > -- > > Key: HIVE-24912 > URL: https://issues.apache.org/jira/browse/HIVE-24912 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 2h 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24912) Support to add repl.target.for property during incremental run
[ https://issues.apache.org/jira/browse/HIVE-24912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17316496#comment-17316496 ] Pravin Sinha commented on HIVE-24912: - +1 > Support to add repl.target.for property during incremental run > -- > > Key: HIVE-24912 > URL: https://issues.apache.org/jira/browse/HIVE-24912 > Project: Hive > Issue Type: Bug >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 2h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24946) Handle failover case during Repl Load
[ https://issues.apache.org/jira/browse/HIVE-24946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha updated HIVE-24946: Description: * Update metric during load to capture the readiness for failover * Remove repl.target.for property on target cluster * Prepare the dump directory to be used during failover first dump operation was: * Update metric during load to capture the readiness for failover * Handle repl.source.for and repl.target.for properties on target cluster (Enable repl.source.for and disable repl.target.for) * Prepare the dump directory to be used during failover first dump operation > Handle failover case during Repl Load > - > > Key: HIVE-24946 > URL: https://issues.apache.org/jira/browse/HIVE-24946 > Project: Hive > Issue Type: New Feature >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > > * Update metric during load to capture the readiness for failover > * Remove repl.target.for property on target cluster > * Prepare the dump directory to be used during failover first dump operation -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24878) ClassNotFound exception for function replication.
[ https://issues.apache.org/jira/browse/HIVE-24878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24878. - Resolution: Fixed Committed to master !!! Thanks for the patch, [~^sharma] > ClassNotFound exception for function replication. > - > > Key: HIVE-24878 > URL: https://issues.apache.org/jira/browse/HIVE-24878 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > > Jar is copied to path /dbName/funcName/nanoTS/jarName/jarName > Correct path should be /dbName/funcName/nanoTS/jarName > Output of hdfs dfs -find on sample function root. > /user/hive/repl/functions/bn9/udf6/28312880814467788/testudf6.jar > /user/hive/repl/functions/bn9/udf6/28312880814467788/testudf6.jar/testudf6.jar -- This message was sent by Atlassian Jira (v8.3.4#803005)
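The HIVE-24878 bug above is a classic path-construction mistake: the jar name was appended once as the destination directory and again as the file, yielding `/dbName/funcName/nanoTS/jarName/jarName`. A minimal sketch of the corrected construction (function name and arguments are illustrative, not Hive's actual code):

```python
import os

def jar_destination(db_name, func_name, nano_ts, jar_name):
    """Build the destination path for a replicated UDF jar.

    The buggy version treated jar_name as a directory and appended the
    file name again; the fix uses the jar name once, as the final
    path component: /dbName/funcName/nanoTS/jarName
    """
    return os.path.join("/", db_name, func_name, str(nano_ts), jar_name)
```

With the sample from the issue, this produces the single-level path that `hdfs dfs -find` should report.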
[jira] [Commented] (HIVE-24878) ClassNotFound exception for function replication.
[ https://issues.apache.org/jira/browse/HIVE-24878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17305957#comment-17305957 ] Pravin Sinha commented on HIVE-24878: - +1 LGTM > ClassNotFound exception for function replication. > - > > Key: HIVE-24878 > URL: https://issues.apache.org/jira/browse/HIVE-24878 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Time Spent: 0.5h > Remaining Estimate: 0h > > Jar is copied to path /dbName/funcName/nanoTS/jarName/jarName > Correct path should be /dbName/funcName/nanoTS/jarName > Output of hdfs dfs -find on sample function root. > /user/hive/repl/functions/bn9/udf6/28312880814467788/testudf6.jar > /user/hive/repl/functions/bn9/udf6/28312880814467788/testudf6.jar/testudf6.jar -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24896) External table having same name as dropped managed table fails to replicate
[ https://issues.apache.org/jira/browse/HIVE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha updated HIVE-24896: Resolution: Fixed Status: Resolved (was: Patch Available) Committed to master !! Thanks for the review, [~aasha] > External table having same name as dropped managed table fails to replicate > --- > > Key: HIVE-24896 > URL: https://issues.apache.org/jira/browse/HIVE-24896 > Project: Hive > Issue Type: Bug >Reporter: Pravin Sinha >Assignee: Pravin Sinha >Priority: Major > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24718) Moving to file based iteration for copying data
[ https://issues.apache.org/jira/browse/HIVE-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24718. - Resolution: Fixed Committed to master Thank you for the patch, [~^sharma] > Moving to file based iteration for copying data > --- > > Key: HIVE-24718 > URL: https://issues.apache.org/jira/browse/HIVE-24718 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24718.01.patch, HIVE-24718.02.patch, > HIVE-24718.04.patch, HIVE-24718.05.patch, HIVE-24718.06.patch > > Time Spent: 6h 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24842) SHOW CREATE TABLE on a VIEW with partition returns wrong sql.
[ https://issues.apache.org/jira/browse/HIVE-24842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24842. - Resolution: Fixed Committed to master. Thank you for the patch, [~anuragshekhar] > SHOW CREATE TABLE on a VIEW with partition returns wrong sql. > -- > > Key: HIVE-24842 > URL: https://issues.apache.org/jira/browse/HIVE-24842 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Anurag Shekhar >Assignee: Anurag Shekhar >Priority: Minor > Labels: pull-request-available > Time Spent: 20m > Remaining Estimate: 0h > > Steps to reproduce > Create a view with partitions. > Execute "SHOW CREATE TABLE " on the above view. > The SQL returned will not have the PARTITIONED ON clause in it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24842) SHOW CREATE TABLE on a VIEW with partition returns wrong sql.
[ https://issues.apache.org/jira/browse/HIVE-24842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17303872#comment-17303872 ] Pravin Sinha commented on HIVE-24842: - +1 > SHOW CREATE TABLE on a VIEW with partition returns wrong sql. > -- > > Key: HIVE-24842 > URL: https://issues.apache.org/jira/browse/HIVE-24842 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Reporter: Anurag Shekhar >Assignee: Anurag Shekhar >Priority: Minor > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > Steps to reproduce > Create a view with partitions. > execute "Show create table " on above view. > The sql returned will not have partitioned on clause in it. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24896) External table having same name as dropped managed table fails to replicate
[ https://issues.apache.org/jira/browse/HIVE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha updated HIVE-24896: Status: Patch Available (was: Open) > External table having same name as dropped managed table fails to replicate > --- > > Key: HIVE-24896 > URL: https://issues.apache.org/jira/browse/HIVE-24896 > Project: Hive > Issue Type: Bug >Reporter: Pravin Sinha >Assignee: Pravin Sinha >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (HIVE-24896) External table having same name as dropped managed table fails to replicate
[ https://issues.apache.org/jira/browse/HIVE-24896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha reassigned HIVE-24896: --- > External table having same name as dropped managed table fails to replicate > --- > > Key: HIVE-24896 > URL: https://issues.apache.org/jira/browse/HIVE-24896 > Project: Hive > Issue Type: Bug >Reporter: Pravin Sinha >Assignee: Pravin Sinha >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24718) Moving to file based iteration for copying data
[ https://issues.apache.org/jira/browse/HIVE-24718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17302080#comment-17302080 ] Pravin Sinha commented on HIVE-24718: - +1 > Moving to file based iteration for copying data > --- > > Key: HIVE-24718 > URL: https://issues.apache.org/jira/browse/HIVE-24718 > Project: Hive > Issue Type: Bug >Reporter: Arko Sharma >Assignee: Arko Sharma >Priority: Major > Labels: pull-request-available > Attachments: HIVE-24718.01.patch, HIVE-24718.02.patch, > HIVE-24718.04.patch, HIVE-24718.05.patch, HIVE-24718.06.patch > > Time Spent: 6.5h > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24818) REPL LOAD of views with partitions fails
[ https://issues.apache.org/jira/browse/HIVE-24818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24818. - Resolution: Fixed Committed to master. Thank you for the patch, [~anuragshekhar] > REPL LOAD of views with partitions fails > - > > Key: HIVE-24818 > URL: https://issues.apache.org/jira/browse/HIVE-24818 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Anurag Shekhar >Assignee: Anurag Shekhar >Priority: Major > Labels: pull-request-available > Time Spent: 50m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24818) REPL LOAD of views with partitions fails
[ https://issues.apache.org/jira/browse/HIVE-24818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17301382#comment-17301382 ] Pravin Sinha commented on HIVE-24818: - +1 > REPL LOAD of views with partitions fails > - > > Key: HIVE-24818 > URL: https://issues.apache.org/jira/browse/HIVE-24818 > Project: Hive > Issue Type: Bug > Components: repl >Reporter: Anurag Shekhar >Assignee: Anurag Shekhar >Priority: Major > Labels: pull-request-available > Time Spent: 40m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (HIVE-24884) Move top level dump metadata content to be in JSON format
[ https://issues.apache.org/jira/browse/HIVE-24884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha updated HIVE-24884: Status: Patch Available (was: Open) > Move top level dump metadata content to be in JSON format > - > > Key: HIVE-24884 > URL: https://issues.apache.org/jira/browse/HIVE-24884 > Project: Hive > Issue Type: Task >Reporter: Pravin Sinha >Assignee: Pravin Sinha >Priority: Major > Labels: pull-request-available > Time Spent: 10m > Remaining Estimate: 0h > > {color:#172b4d}The current content for _dumpmetadata file is TAB separated. > This is not very flexible for extension. A more flexible format like JSON > based content would be helpful for extending the content.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
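The motivation in HIVE-24884 above — a TAB-separated `_dumpmetadata` line cannot grow new fields without breaking positional parsers, while a JSON object can — is easy to see in a sketch. The field names below are illustrative, not Hive's actual dump-metadata schema:

```python
import json

def dump_metadata_to_json(fields):
    """Serialize top-level dump metadata as a single JSON object.

    Unlike the old TAB-separated line, readers look fields up by key,
    so adding a new key later does not break existing parsers.
    """
    return json.dumps(fields, sort_keys=True)

def parse_dump_metadata(text):
    """Parse the JSON-format dump metadata back into a dict."""
    return json.loads(text)
```

A reader that only knows about `dumpType` keeps working unchanged if a later writer adds, say, a `failover` flag alongside it.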
[jira] [Assigned] (HIVE-24884) Move top level dump metadata content to be in JSON format
[ https://issues.apache.org/jira/browse/HIVE-24884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha reassigned HIVE-24884: --- > Move top level dump metadata content to be in JSON format > - > > Key: HIVE-24884 > URL: https://issues.apache.org/jira/browse/HIVE-24884 > Project: Hive > Issue Type: Task >Reporter: Pravin Sinha >Assignee: Pravin Sinha >Priority: Major > > {color:#172b4d}The current content for _dumpmetadata file is TAB separated. > This is not very flexible for extension. A more flexible format like JSON > based content would be helpful for extending the content.{color} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24783) Store currentNotificationID on target during repl load operation
[ https://issues.apache.org/jira/browse/HIVE-24783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24783. - Resolution: Fixed Committed to master. Thank you for the patch, [~haymant] > Store currentNotificationID on target during repl load operation > > > Key: HIVE-24783 > URL: https://issues.apache.org/jira/browse/HIVE-24783 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-24783) Store currentNotificationID on target during repl load operation
[ https://issues.apache.org/jira/browse/HIVE-24783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17293311#comment-17293311 ] Pravin Sinha edited comment on HIVE-24783 at 3/2/21, 10:05 PM: --- +1 was (Author: pkumarsinha): +1 Pending test > Store currentNotificationID on target during repl load operation > > > Key: HIVE-24783 > URL: https://issues.apache.org/jira/browse/HIVE-24783 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Comment Edited] (HIVE-24783) Store currentNotificationID on target during repl load operation
[ https://issues.apache.org/jira/browse/HIVE-24783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17293311#comment-17293311 ] Pravin Sinha edited comment on HIVE-24783 at 3/2/21, 3:13 AM: -- +1 Pending test was (Author: pkumarsinha): +1 > Store currentNotificationID on target during repl load operation > > > Key: HIVE-24783 > URL: https://issues.apache.org/jira/browse/HIVE-24783 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24783) Store currentNotificationID on target during repl load operation
[ https://issues.apache.org/jira/browse/HIVE-24783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17293311#comment-17293311 ] Pravin Sinha commented on HIVE-24783: - +1 > Store currentNotificationID on target during repl load operation > > > Key: HIVE-24783 > URL: https://issues.apache.org/jira/browse/HIVE-24783 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Haymant Mangla >Assignee: Haymant Mangla >Priority: Major > Labels: pull-request-available > Time Spent: 1h 10m > Remaining Estimate: 0h > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (HIVE-24750) Create a single copy task for external tables within default DB location
[ https://issues.apache.org/jira/browse/HIVE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pravin Sinha resolved HIVE-24750. - Resolution: Fixed > Create a single copy task for external tables within default DB location > > > Key: HIVE-24750 > URL: https://issues.apache.org/jira/browse/HIVE-24750 > Project: Hive > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Presently we create single task for each table, but for the tables within > default DB location, we can copy the DB location in one task. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HIVE-24750) Create a single copy task for external tables within default DB location
[ https://issues.apache.org/jira/browse/HIVE-24750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17292555#comment-17292555 ] Pravin Sinha commented on HIVE-24750: - Committed to master. Thank you for the patch, [~ayushtkn] > Create a single copy task for external tables within default DB location > > > Key: HIVE-24750 > URL: https://issues.apache.org/jira/browse/HIVE-24750 > Project: Hive > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Ayush Saxena >Priority: Major > Labels: pull-request-available > Time Spent: 1h > Remaining Estimate: 0h > > Presently we create single task for each table, but for the tables within > default DB location, we can copy the DB location in one task. -- This message was sent by Atlassian Jira (v8.3.4#803005)
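The optimization in HIVE-24750 above replaces per-table copy tasks with one task for the DB location whenever tables live under it, while tables at custom locations still get their own tasks. A sketch of that grouping decision (a simplified model, not Hive's actual task planner):

```python
def plan_copy_tasks(db_location, table_paths):
    """Collapse tables located under the DB directory into a single copy
    task for the DB location; tables outside it each get their own task.
    """
    prefix = db_location.rstrip("/") + "/"
    outside = [p for p in table_paths
               if not (p == db_location or p.startswith(prefix))]
    tasks = []
    if len(outside) < len(table_paths):
        # At least one table sits under the DB location: one task covers them all.
        tasks.append(db_location)
    tasks.extend(outside)
    return tasks
```

For a database with many tables in the default location, this turns N copy tasks into one plus a task per externally located table.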