[
https://issues.apache.org/jira/browse/GOBBLIN-2006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Arpit Varshney updated GOBBLIN-2006:
------------------------------------
Description:
Currently, while cleaning log files, the Retention job runs out of memory (OOM) and
fails when the number of log files is very large.
While fetching all dataset versions, the Retention job loads every file status
into memory at once, which causes this issue.
The Retention job should therefore avoid loading all data into memory and use an
iterator-based approach instead. This keeps only a limited number of file statuses
in memory, making the retention job pipeline more robust to OOM errors.
was:
Currently, while cleaning log files, the Retention job runs out of memory (OOM) and
fails when the number of log files is very large.
While fetching all dataset versions, Retention loads every file status into
memory at once, which causes this issue.
The Retention job should therefore avoid loading all data into memory and use an
iterator-based approach instead. This keeps only a limited number of file statuses
in memory, making the retention job pipeline more robust.
> Retention Job should be more robust to OOM failures
> ---------------------------------------------------
>
> Key: GOBBLIN-2006
> URL: https://issues.apache.org/jira/browse/GOBBLIN-2006
> Project: Apache Gobblin
> Issue Type: Bug
> Components: misc
> Reporter: Arpit Varshney
> Priority: Major
>
> Currently, while cleaning log files, the Retention job runs out of memory (OOM) and
> fails when the number of log files is very large.
> While fetching all dataset versions, the Retention job loads every file status
> into memory at once, which causes this issue.
> The Retention job should therefore avoid loading all data into memory and use an
> iterator-based approach instead. This keeps only a limited number of file statuses
> in memory, making the retention job pipeline more robust to OOM errors.
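The iterator-based approach described above can be sketched as follows. This is not Gobblin's actual retention code: it is a minimal, self-contained illustration using `java.nio.file.DirectoryStream`, which yields one directory entry at a time instead of materializing the whole listing. The class name `IteratorRetention`, the method `cleanOldFiles`, and the age-based policy are all hypothetical; against HDFS, the analogous lazy primitive would be Hadoop's `FileSystem.listStatusIterator` rather than the array-returning `FileSystem.listStatus`.

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class IteratorRetention {

    // Deletes files in logDir older than maxAgeDays and returns the count.
    // DirectoryStream streams entries lazily, so only one entry's metadata
    // is resident in memory per iteration -- the heap footprint stays flat
    // no matter how many log files the directory contains.
    static int cleanOldFiles(Path logDir, long maxAgeDays) throws IOException {
        Instant cutoff = Instant.now().minus(maxAgeDays, ChronoUnit.DAYS);
        int deleted = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(logDir)) {
            for (Path file : stream) {
                FileTime mtime = Files.getLastModifiedTime(file);
                if (mtime.toInstant().isBefore(cutoff)) {
                    Files.delete(file);
                    deleted++;
                }
            }
        }
        return deleted;
    }

    public static void main(String[] args) throws IOException {
        // Demo: one stale file past the 7-day cutoff, one fresh file kept.
        Path dir = Files.createTempDirectory("retention-demo");
        Path oldLog = Files.createFile(dir.resolve("old.log"));
        Path newLog = Files.createFile(dir.resolve("new.log"));
        Files.setLastModifiedTime(oldLog,
                FileTime.from(Instant.now().minus(10, ChronoUnit.DAYS)));
        System.out.println("deleted=" + cleanOldFiles(dir, 7));
        System.out.println("newKept=" + Files.exists(newLog));
    }
}
```

The contrast with the failing behavior is the collection type: an eager `listStatus`-style call returns an array sized to the full listing, while the iterator bounds memory use to a single entry at a time.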
--
This message was sent by Atlassian Jira
(v8.20.10#820010)