[ https://issues.apache.org/jira/browse/HIVE-26932?focusedWorklogId=839515&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-839515 ]
ASF GitHub Bot logged work on HIVE-26932:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 17/Jan/23 06:28
Start Date: 17/Jan/23 06:28
Worklog Time Spent: 10m
Work Description: harshal-16 opened a new pull request, #3957:
URL: https://github.com/apache/hive/pull/3957
Problem:
- If an incremental dump operation fails while dumping an event id to the
staging directory, the dump directory for that event id (along with the
_dumpmetadata file) remains in the dump location, and the event id is
recorded in the _events_dump file.
- When the user triggers a dump operation for this policy again, it resumes
from the failed event id and tries to dump it again. Because the directory
for that event id was already created in the previous cycle, the dump fails
with an exception.
Solution:
- Fixed cleanFailedEventDirIfExists to remove the directory of the failed
event id for the selected database.
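For context, a minimal sketch of the cleanup idea described above, assuming the Hadoop FileSystem API. The class name and the dumpRoot/failedEventId parameters are illustrative assumptions, not the actual Hive implementation of cleanFailedEventDirIfExists:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FailedEventDirCleaner {
      // Remove the partially written directory of the failed event id so the
      // retried dump cycle can recreate it instead of failing because the
      // path already exists.
      public static void cleanFailedEventDirIfExists(Path dumpRoot, long failedEventId,
          Configuration conf) throws IOException {
        // Assumption: event data is dumped under <dumpRoot>/<eventId>.
        Path failedEventDir = new Path(dumpRoot, String.valueOf(failedEventId));
        FileSystem fs = dumpRoot.getFileSystem(conf);
        if (fs.exists(failedEventDir)) {
          // Recursive delete: the directory may already contain partial event files.
          fs.delete(failedEventDir, true);
        }
      }
    }

With the stale directory gone, the retried cycle regenerates the event dump from scratch rather than aborting on the leftover path.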
Issue Time Tracking
-------------------
Worklog Id: (was: 839515)
Remaining Estimate: 0h
Time Spent: 10m
> Correct stage name value in replication_metrics.progress column in
> replication_metrics table
> --------------------------------------------------------------------------------------------
>
> Key: HIVE-26932
> URL: https://issues.apache.org/jira/browse/HIVE-26932
> Project: Hive
> Issue Type: Improvement
> Reporter: Harshal Patel
> Assignee: Harshal Patel
> Priority: Major
> Time Spent: 10m
> Remaining Estimate: 0h
>
> To improve diagnostic capability for source-to-backup replication, update
> the replication_metrics table by adding a pre_optimized_bootstrap stage to
> the progress column for the first cycle of optimized bootstrap.