BjornPrime commented on code in PR #25965:
URL: https://github.com/apache/beam/pull/25965#discussion_r1242715847
##########
sdks/python/apache_beam/runners/portability/sdk_container_builder.py:
##########
@@ -307,16 +306,16 @@ def _invoke_docker_build_and_push(self, container_image_name):
"Python SDK container built and pushed as %s." % container_image_name)
def _upload_to_gcs(self, local_file_path, gcs_location):
- gcs_bucket, gcs_object = self._get_gcs_bucket_and_name(gcs_location)
- request = storage.StorageObjectsInsertRequest(
- bucket=gcs_bucket, name=gcs_object)
+ bucket_name, blob_name = self._get_gcs_bucket_and_name(gcs_location)
_LOGGER.info('Starting GCS upload to %s...', gcs_location)
- total_size = os.path.getsize(local_file_path)
from apitools.base.py import exceptions
+ from google.cloud import storage
Review Comment:
Maybe? Most of the FileSystems methods I'm looking at return a file stream,
so the result would look more like the previous implementation, though
upload_from_filename does essentially the same thing one level up. The
advantage would be that the FileSystems abstraction hopefully changes less
often than the underlying IO client, so we wouldn't need to touch this code
as often. This file isn't already using FileSystems, though, so it's not
clear to me that we'd gain much by switching.
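
For context, a rough sketch of the two approaches under discussion: a direct
google.cloud.storage upload (roughly the shape the new code takes) versus
streaming through Beam's FileSystems abstraction. The function names, the
shutil-based copy, and the bare FileSystems.create call are illustrative
assumptions for this sketch, not code from this PR.

```python
# Sketch only -- names and structure are illustrative, not the PR's actual code.
import shutil

from apache_beam.io.filesystems import FileSystems
from google.cloud import storage


def upload_with_gcs_client(local_file_path, bucket_name, blob_name):
  """Direct upload via the google.cloud.storage client."""
  client = storage.Client()
  blob = client.bucket(bucket_name).blob(blob_name)
  # upload_from_filename opens and streams the local file for us.
  blob.upload_from_filename(local_file_path)


def upload_via_filesystems(local_file_path, gcs_location):
  """Alternative: stream through Beam's FileSystems abstraction.

  FileSystems.create returns a writable file stream, so this ends up
  looking more like the previous apitools-based implementation, where the
  bytes are copied by this code rather than by the client library.
  """
  with open(local_file_path, 'rb') as src:
    with FileSystems.create(gcs_location) as dst:
      shutil.copyfileobj(src, dst)
```

The trade-off is roughly as described above: the FileSystems route hides the
storage client behind Beam's own interface, while the direct client call keeps
the GCS dependency explicit in this file.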
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]