[ 
https://issues.apache.org/jira/browse/BEAM-12986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17426433#comment-17426433
 ] 

Chamikara Madhusanka Jayalath commented on BEAM-12986:
------------------------------------------------------

This is a known issue. Part of the problem is that there's currently no 
runner-independent way to clean up resources when a Beam pipeline fails.

I think a workaround would be to periodically look for tables corresponding 
to failed jobs and delete them. The load job names are standardized and 
follow the pattern here: 
[https://github.com/apache/beam/blob/49a96b9ca510974cc81231b22ab05a7ae4485888/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/BigQueryResourceNaming.java#L36]

Here "JOB_ID" should map to the ID of the failed job (for supported runners).

 

 

> WriteTables leaves behind temporary tables on job failure
> ---------------------------------------------------------
>
>                 Key: BEAM-12986
>                 URL: https://issues.apache.org/jira/browse/BEAM-12986
>             Project: Beam
>          Issue Type: Improvement
>          Components: extensions-java-gcp, io-java-gcp
>    Affects Versions: 2.29.0
>            Reporter: Jan
>            Priority: P2
>
> I'm running a job that writes to a BigQuery table using 
> `BigQueryIO.writeTableRows().to(
> new SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>)`.
>  
> I'm noticing that when my job fails, it leaves behind temporary tables 
> (`beam_bq_job_LOAD_*`) in the destination dataset. These tables are created 
> by load jobs started here:
>  
> [https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L273-L284]
>  
> I'd like to specify a temporary dataset for these load job result tables, but 
> I don't see a way to specify one using the Java SDK. It seems like the load 
> job destination is inferred by changing the table id of the final destination:
>  
> [https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/WriteTables.java#L255]
>  
> which makes me think that the configuration I want to set doesn't exist. Is 
> there a workaround to avoid having these tables be left around when the job 
> fails? Could the option be added?
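
For context, here is a minimal sketch of the dynamic-destinations write 
described at the top of the quoted issue. The table, dataset, and schema are 
hypothetical; this is an illustration of the API shape, not the reporter's 
actual pipeline:

{code:java}
import com.google.api.services.bigquery.model.TableFieldSchema;
import com.google.api.services.bigquery.model.TableRow;
import com.google.api.services.bigquery.model.TableSchema;
import java.util.Collections;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.WriteDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.TableDestination;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.transforms.SerializableFunction;
import org.apache.beam.sdk.values.ValueInSingleWindow;

public class DynamicDestinationWrite {

  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    TableSchema schema =
        new TableSchema()
            .setFields(
                Collections.singletonList(
                    new TableFieldSchema().setName("user").setType("STRING")));

    p.apply(Create.of(new TableRow().set("user", "alice")).withCoder(TableRowJsonCoder.of()))
        .apply(
            BigQueryIO.writeTableRows()
                .to(
                    // Route each element to a per-user table in the destination dataset.
                    (SerializableFunction<ValueInSingleWindow<TableRow>, TableDestination>)
                        row ->
                            new TableDestination(
                                "my-project:my_dataset.events_" + row.getValue().get("user"),
                                "Per-user events table"))
                .withSchema(schema)
                .withCreateDisposition(CreateDisposition.CREATE_IF_NEEDED)
                .withWriteDisposition(WriteDisposition.WRITE_APPEND));

    p.run().waitUntilFinish();
  }
}
{code}

Each load job triggered by this write creates an intermediate 
"beam_bq_job_LOAD_*" table in the destination dataset, which is what gets 
left behind when the job fails.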



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
