pabloem commented on a change in pull request #12485:
URL: https://github.com/apache/beam/pull/12485#discussion_r468825677
##########
File path: sdks/python/apache_beam/io/gcp/bigquery.py
##########
@@ -304,6 +308,8 @@ def compute_table_name(row):
   NOTE: This job name template does not have backwards compatibility
   guarantees.
   """
 BQ_JOB_NAME_TEMPLATE = "beam_bq_job_{job_type}_{job_id}_{step_id}{random}"
+"""The number of shards per destination when writing via streaming inserts."""
+DEFAULT_SHARDS_PER_DESTINATION = 500

Review comment:
   We've been able to reach ~1k EPS per worker in Python. If we have 50 shards, we'll only reach ~50k maximum. I'd like to have a larger default, so we don't automatically cap EPS at a very low rate.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
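The throughput cap the reviewer describes is simple arithmetic: the total streaming-insert rate for one destination is bounded by the number of shards times the per-shard rate. A minimal sketch of that reasoning (illustrative only, not Beam API; the function name and figures below are assumptions based on the numbers cited in the review):

```python
# Back-of-the-envelope bound on streaming-insert throughput per
# destination table: total EPS <= num_shards * per-shard EPS.

def max_eps(num_shards: int, eps_per_shard: float) -> float:
    """Upper bound on events/sec writable to one destination table."""
    return num_shards * eps_per_shard

# With ~1k EPS per worker (the figure cited in the review comment):
print(max_eps(50, 1_000))   # a default of 50 shards caps at ~50k EPS
print(max_eps(500, 1_000))  # 500 shards raises the ceiling tenfold
```

This is why the review pushes for a larger `DEFAULT_SHARDS_PER_DESTINATION`: the default shard count directly sets the throughput ceiling before any manual tuning.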