[
https://issues.apache.org/jira/browse/BEAM-3990?focusedWorklogId=87311&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-87311
]
ASF GitHub Bot logged work on BEAM-3990:
----------------------------------------
Author: ASF GitHub Bot
Created on: 03/Apr/18 22:21
Start Date: 03/Apr/18 22:21
Worklog Time Spent: 10m
Work Description: chamikaramj closed pull request #5000: [BEAM-3990]
Revert "[BEAM-2264] Credentials were not being reused between GCS calls"
URL: https://github.com/apache/beam/pull/5000
This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:
diff --git a/sdks/python/apache_beam/io/gcp/gcsio.py
b/sdks/python/apache_beam/io/gcp/gcsio.py
index c7986cdb672..f687686fd64 100644
--- a/sdks/python/apache_beam/io/gcp/gcsio.py
+++ b/sdks/python/apache_beam/io/gcp/gcsio.py
@@ -146,8 +146,6 @@ class GcsIOError(IOError, retry.PermanentException):
class GcsIO(object):
"""Google Cloud Storage I/O client."""
- local_state = threading.local()
-
def __new__(cls, storage_client=None):
if storage_client:
# This path is only used for testing.
@@ -157,7 +155,7 @@ def __new__(cls, storage_client=None):
# creating more than one storage client for each thread, since each
# initialization requires the relatively expensive step of initializing
# credentaials.
- local_state = GcsIO.local_state
+ local_state = threading.local()
if getattr(local_state, 'gcsio_instance', None) is None:
credentials = auth.get_service_credentials()
storage_client = storage.StorageV1(
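For context, the reverted change cached the client on a class-level `threading.local`, so each thread reused a single instance across calls; the restored code creates a fresh `threading.local()` inside `__new__` on every call, so the cached attribute is never found and a new client is built each time. A minimal sketch of the difference (class and attribute names here are illustrative, not Beam's actual code):

```python
import threading


class CachedClient(object):
  """Sketch of the reverted change: a class-level threading.local is
  shared across calls, giving per-thread reuse of one instance."""
  local_state = threading.local()

  def __new__(cls):
    if getattr(cls.local_state, 'instance', None) is None:
      # Expensive initialization (e.g. credentials) would happen here.
      cls.local_state.instance = object.__new__(cls)
    return cls.local_state.instance


class UncachedClient(object):
  """Sketch of the restored behavior: a fresh threading.local() per
  call means the 'cached' attribute is never present."""
  def __new__(cls):
    local_state = threading.local()  # new object every call
    if getattr(local_state, 'instance', None) is None:
      local_state.instance = object.__new__(cls)
    return local_state.instance


# Within one thread, the cached variant returns the same object;
# the uncached variant constructs a new instance on every call.
assert CachedClient() is CachedClient()
assert UncachedClient() is not UncachedClient()
```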
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 87311)
Time Spent: 50m (was: 40m)
> Dataflow jobs fail with "KeyError: 'location'" when uploading to GCS
> --------------------------------------------------------------------
>
> Key: BEAM-3990
> URL: https://issues.apache.org/jira/browse/BEAM-3990
> Project: Beam
> Issue Type: Bug
> Components: sdk-py-core
> Reporter: Chamikara Jayalath
> Assignee: Charles Chen
> Priority: Major
> Time Spent: 50m
> Remaining Estimate: 0h
>
> Some Dataflow jobs are failing due to the following error (in worker logs).
>
> Error in _start_upload while inserting file
> gs://cloud-ml-benchmark-output-us-central/df1-cloudml-benchmark-criteo-small-python-033010274088282-presubmit3/033010274088282/temp/df1-cloudml-benchmark-criteo-small-python-033010274088282-presubmit3.1522430898.446147/dax-tmp-2018-03-30_10_28_40-14595186994726940229-S241-1-dc87ef69274882bf/tmp-dc87ef6927488c5a-shard--try-308ae8b3268d12b2-endshard.avro:
> Traceback (most recent call last):
>   File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/gcsio.py", line 559, in _start_upload
>     self._client.objects.Insert(self._insert_request, upload=self._upload)
>   File "/usr/local/lib/python2.7/dist-packages/apache_beam/io/gcp/internal/clients/storage/storage_v1_client.py", line 971, in Insert
>     download=download)
>   File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/base_api.py", line 706, in _RunMethod
>     http_request, client=self.client)
>   File "/usr/local/lib/python2.7/dist-packages/apitools/base/py/transfer.py", line 860, in InitializeUpload
>     url = http_response.info['location']
> KeyError: 'location'
>
> This seems to be due to https://github.com/apache/beam/pull/4891. Possibly
> storage.StorageV1() cannot be shared across multiple requests without
> additional fixes.
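The traceback ends in apitools' transfer.py, where the resumable-upload initializer indexes the response headers directly, so a response that lacks a Location header surfaces as a bare KeyError rather than a descriptive failure. A hedged sketch of a more defensive header lookup (a hypothetical helper for illustration; the actual fix in Beam was to revert the client-sharing change, not to patch apitools):

```python
# response_info stands in for apitools' http_response.info, which behaves
# like a mapping of response headers. This helper name is hypothetical.
def extract_upload_url(response_info):
  """Return the resumable-upload URL, failing loudly if it is absent."""
  location = response_info.get('location')
  if location is None:
    raise IOError(
        'Resumable upload initialization failed: response contained '
        'no Location header')
  return location


# A present header is returned as-is.
assert extract_upload_url(
    {'location': 'https://upload.example/session/abc'}) == \
    'https://upload.example/session/abc'

# A missing header now raises a clear IOError instead of KeyError.
try:
  extract_upload_url({'status': '503'})
except IOError:
  pass
```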
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)