[
https://issues.apache.org/jira/browse/BEAM-5040?focusedWorklogId=128872&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-128872
]
ASF GitHub Bot logged work on BEAM-5040:
----------------------------------------
Author: ASF GitHub Bot
Created on: 30/Jul/18 18:45
Start Date: 30/Jul/18 18:45
Worklog Time Spent: 10m
Work Description: reuvenlax commented on a change in pull request #6080:
[BEAM-5040] Fix retry bug for BigQuery jobs.
URL: https://github.com/apache/beam/pull/6080#discussion_r206280622
##########
File path:
sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIO.java
##########
@@ -1689,6 +1689,11 @@ public WriteResult expand(PCollection<T> input) {
if (getMaxFileSize() != null) {
batchLoads.setMaxFileSize(getMaxFileSize());
}
+      // When running in streaming (unbounded mode) we want to retry failed load jobs
+      // indefinitely. Failing the bundle is expensive, so we set a fairly high limit on retries.
+ if (IsBounded.UNBOUNDED.equals(input.isBounded())) {
+ batchLoads.setMaxRetryJobs(1000);
Review comment:
In streaming mode the runner retries failed bundles forever regardless of this
setting, so failed load jobs are still retried indefinitely. Without a high
per-job retry limit, each failure fails the whole bundle (runners retry bundles,
not elements), and potentially unrelated load jobs get re-issued, which is even
worse for the daily load-job quota.
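The trade-off the comment describes can be sketched in plain Java. This is a minimal, hypothetical helper (not Beam's actual implementation); the `IsBounded` enum stands in for `PCollection.IsBounded`, and the names are assumptions for illustration only:

```java
public class RetryLimitSketch {
  // Stand-in for PCollection.IsBounded; an assumption for this standalone sketch.
  enum IsBounded { BOUNDED, UNBOUNDED }

  /**
   * Picks the per-load-job retry limit. Streaming (unbounded) pipelines get a
   * high limit because failing the bundle is expensive: the runner retries
   * whole bundles, re-running potentially unrelated load jobs and burning
   * daily load-job quota.
   */
  static int maxRetryJobs(IsBounded boundedness, int boundedDefault) {
    return boundedness == IsBounded.UNBOUNDED ? 1000 : boundedDefault;
  }

  public static void main(String[] args) {
    System.out.println(maxRetryJobs(IsBounded.UNBOUNDED, 3)); // 1000
    System.out.println(maxRetryJobs(IsBounded.BOUNDED, 3));   // 3
  }
}
```

The point is that the high limit changes *where* the retry happens: each load job is retried in place many times before the bundle itself is failed and re-run.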
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 128872)
Time Spent: 1h (was: 50m)
> BigQueryIO retries infinitely in WriteTable and WriteRename
> -----------------------------------------------------------
>
> Key: BEAM-5040
> URL: https://issues.apache.org/jira/browse/BEAM-5040
> Project: Beam
> Issue Type: Bug
> Components: io-java-gcp
> Affects Versions: 2.5.0
> Reporter: Reuven Lax
> Assignee: Reuven Lax
> Priority: Major
> Time Spent: 1h
> Remaining Estimate: 0h
>
> BigQueryIO retries infinitely in WriteTable and WriteRename
> Several failure scenarios with the current code:
> # It's possible for a load job to return failure even though it actually
> succeeded (e.g. the reply might have timed out). In this case, BigQueryIO
> will retry the job which will fail again (because the job id has already been
> used), leading to indefinite retries. Correct behavior is to stop retrying as
> the load job has succeeded.
> # It's possible for a load job to be accepted by BigQuery, but then to fail
> on the BigQuery side. In this case a retry with the same job id will fail as
> that job id has already been used. BigQueryIO will sometimes detect this, but
> if the worker has restarted it will instead issue a load with the old job id
> and go into a retry loop. Correct behavior is to generate a new deterministic
> job id and retry using that new job id.
> # In many cases of worker restart, BigQueryIO ends up in infinite retry
> loops.
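Scenario 2's "generate a new deterministic job id" fix can be sketched as follows. This is a hypothetical helper, not Beam's actual code; the name `retryJobId` and the suffix scheme are assumptions for illustration:

```java
public class JobIdSketch {
  /**
   * Derives a fresh but deterministic job id for each retry attempt.
   * BigQuery rejects a job id that has already been consumed, so re-submitting
   * the original id after a server-side failure loops forever; appending a
   * retry index gives every attempt a new id that a restarted worker can
   * recompute deterministically and check before submitting again.
   */
  static String retryJobId(String baseJobId, int retryIndex) {
    return retryIndex == 0 ? baseJobId : baseJobId + "_" + retryIndex;
  }

  public static void main(String[] args) {
    System.out.println(retryJobId("beam_load_abc", 0)); // beam_load_abc
    System.out.println(retryJobId("beam_load_abc", 2)); // beam_load_abc_2
  }
}
```

Because the sequence of ids is deterministic, a restarted worker can walk the same sequence and probe each job's status instead of blindly re-issuing a consumed id, which addresses scenario 3 as well.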
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)