[
https://issues.apache.org/jira/browse/BEAM-7742?focusedWorklogId=298872&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-298872
]
ASF GitHub Bot logged work on BEAM-7742:
----------------------------------------
Author: ASF GitHub Bot
Created on: 21/Aug/19 17:15
Start Date: 21/Aug/19 17:15
Worklog Time Spent: 10m
Work Description: pabloem commented on pull request #9242: [BEAM-7742]
Partition files in BQFL to cater to quotas & limits
URL: https://github.com/apache/beam/pull/9242#discussion_r316293709
##########
File path: sdks/python/apache_beam/io/gcp/bigquery_file_loads.py
##########
@@ -632,75 +693,57 @@ def _write_files(self, destination_data_kv_pc, file_prefix_pcv):
             accumulation_mode=trigger.AccumulationMode.DISCARDING))
     return all_destination_file_pairs_pc

-  def expand(self, pcoll):
-    p = pcoll.pipeline
-
-    temp_location = p.options.view_as(GoogleCloudOptions).temp_location
-
-    load_job_name_pcv = pvalue.AsSingleton(
-        p
-        | "ImpulseJobName" >> beam.Create([None])
-        | beam.Map(lambda _: _generate_load_job_name()))
-
-    file_prefix_pcv = pvalue.AsSingleton(
-        p
-        | "CreateFilePrefixView" >> beam.Create([''])
-        | "GenerateFilePrefix" >> beam.Map(
-            file_prefix_generator(self._validate,
-                                  self._custom_gcs_temp_location,
-                                  temp_location)))
-
-    destination_data_kv_pc = (
-        pcoll
-        | "RewindowIntoGlobal" >> self._window_fn()
-        | "AppendDestination" >> beam.ParDo(bigquery_tools.AppendDestinationsFn(
-            self.destination), *self.table_side_inputs))
-
-    all_destination_file_pairs_pc = self._write_files(destination_data_kv_pc,
-                                                      file_prefix_pcv)
-
-    grouped_files_pc = (
-        all_destination_file_pairs_pc
-        | "GroupFilesByTableDestinations" >> beam.GroupByKey())
-
-    # Load Jobs are triggered to temporary tables, and those are later copied to
-    # the actual appropriate destination query. This ensures atomicity when only
-    # some of the load jobs would fail but not other.
-    # If any of them fails, then copy jobs are not triggered.
+  def _load_data(self, partitions_using_temp_tables,
+                 partitions_direct_to_destination, load_job_name_pcv,
+                 singleton_pc):
+    """Load data to BigQuery
+
+    Data is loaded into BigQuery in the following two ways:
+      1. Single partition per destination:
Review comment:
I love this explicit pydoc describing the behavior. Could you:
- This behavior only occurs with a single partition, so I would change the
titles to `1. Single partition` and `2. Multiple partitions`. If there are
multiple destinations with a single partition each, we have to treat this as
case #2, right? (as we discussed).
- In section 1, instead of `When there is a single partition of files
destined to a single destination, a single load job is triggered for each
partition.`, maybe just write `When there is a single partition of files
destined to a single destination, a single load job is triggered.`
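
To make the two cases concrete, here is a minimal, illustrative sketch of the
routing the docstring describes. It is not the code from PR #9242;
`partition_files`, `route`, and `MAX_FILES_PER_JOB` are invented names, and
the limit value is a placeholder:

```python
# Illustrative only -- not the implementation in PR #9242.
# MAX_FILES_PER_JOB is a made-up stand-in for the real BigQuery quota.
MAX_FILES_PER_JOB = 10000

def partition_files(files, max_files=MAX_FILES_PER_JOB):
    """Split one destination's files into quota-sized partitions."""
    return [files[i:i + max_files] for i in range(0, len(files), max_files)]

def route(destination, files):
    """Decide between a direct load and the temp-table path.

    A single partition can be loaded straight into the destination
    table with one job. Multiple partitions mean multiple jobs, so
    they are loaded into temporary tables first and copied to the
    destination only if every load succeeds; if any load fails,
    no copy jobs run and the destination is left untouched.
    """
    partitions = partition_files(files)
    if len(partitions) == 1:
        return ('direct_to_destination', destination, partitions)
    return ('via_temp_tables', destination, partitions)
```

Whether that fast path applies per destination, or only when the entire write
produces a single partition, is exactly the question raised in the comment
above.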
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 298872)
Time Spent: 3h (was: 2h 50m)
> BigQuery File Loads to work well with load job size limits
> ----------------------------------------------------------
>
> Key: BEAM-7742
> URL: https://issues.apache.org/jira/browse/BEAM-7742
> Project: Beam
> Issue Type: Improvement
> Components: io-py-gcp
> Reporter: Pablo Estrada
> Assignee: Tanay Tummalapalli
> Priority: Major
> Time Spent: 3h
> Remaining Estimate: 0h
>
> Load jobs into BigQuery have a number of limitations:
> [https://cloud.google.com/bigquery/quotas#load_jobs]
>
> Currently, the Python BQ sink implemented in `bigquery_file_loads.py` does
> not handle these limitations well. Improvements need to be made to the
> implementation, to:
>  * Decide dynamically, at pipeline execution time, whether to use temp_tables.
>  * Add code to determine when a load job to a single destination needs to be
> partitioned into multiple jobs.
>  * When that happens, we definitely need to use temp_tables, in case one of
> the resulting load jobs fails and the pipeline is rerun (a sketch of such
> partitioning follows below).
> Tanay, would you be able to look at this?
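
As a rough illustration of the partitioning the description asks for (this is
not Beam code; `partition_for_load_jobs` is an invented name and both limits
are placeholders, with the authoritative values on the quotas page linked
above):

```python
MAX_FILES_PER_PARTITION = 10000          # placeholder for the per-job URI limit
MAX_BYTES_PER_PARTITION = 15 * 1024**4   # placeholder for the per-job size limit

def partition_for_load_jobs(files_with_sizes,
                            max_files=MAX_FILES_PER_PARTITION,
                            max_bytes=MAX_BYTES_PER_PARTITION):
    """Greedily pack (file_name, size_in_bytes) pairs into partitions
    that each fit within a single load job's limits."""
    partitions = [[]]
    files_in_partition = 0
    bytes_in_partition = 0
    for file_name, size in files_with_sizes:
        over_limit = (files_in_partition + 1 > max_files or
                      bytes_in_partition + size > max_bytes)
        if over_limit and partitions[-1]:
            # Start a new partition; each partition becomes one load job.
            partitions.append([])
            files_in_partition = 0
            bytes_in_partition = 0
        partitions[-1].append(file_name)
        files_in_partition += 1
        bytes_in_partition += size
    return partitions
```

Each resulting partition maps to one load job; as soon as a destination needs
more than one partition, the temp-table route discussed in the PR review above
is required so that a partial failure plus a rerun cannot leave duplicate or
half-written data in the destination table.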
--
This message was sent by Atlassian Jira
(v8.3.2#803003)