1. Why hasn't the GCS client download been updated since early June?
Numerous changes have been checked in since then. As I understand it, one
of them is crucial to GCS reliability.
2. Why hasn't the GCS client been integrated into the App Engine SDK /
Runtime? This seems like the best way to ensure developers are using the
latest approved version.
3. These 'hangs' are reminiscent of the unreliability of the deprecated
files API... my hope remains that GCS can become MUCH more reliable in the
very near future!
4. On a side note... the Python SDK installation can take A LONG time...
I believe this is largely because more and more Django versions keep
getting added to the SDK. Without the bundled Django versions (0.96
through 1.5) the uncompressed SDK size drops from 138MB to 24MB (from
19k+ files to just 1.9k). I'm a Django fan but I'm not using it with App
Engine at the moment... it would be nice if the Django installation were
optional and/or let you select which Django versions to install... I
doubt any developer needs every version from 0.96 to 1.5. Just a minor
annoyance really... please fix the GCS issues asap!
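In the meantime, one way to keep a hung write from blocking a backend
indefinitely is a client-side watchdog thread. This is a generic Python
sketch of my own (the function name and values are mine, not part of the
cloudstorage API); it only bounds how long the caller waits, since the
stuck worker thread itself can't be killed:

```python
import threading
import time

def call_with_deadline(fn, deadline_s, *args, **kwargs):
    """Run fn(*args, **kwargs) in a daemon thread, waiting at most deadline_s.

    Returns (finished, result). If the call is still running when the
    deadline expires, finished is False and the worker keeps running in
    the background -- this bounds the caller's wait, nothing more.
    """
    outcome = {}

    def worker():
        outcome['result'] = fn(*args, **kwargs)

    t = threading.Thread(target=worker)
    t.daemon = True
    t.start()
    t.join(deadline_s)
    if t.is_alive():
        return False, None
    return True, outcome.get('result')

# A fast call completes; a deliberately slow one trips the deadline.
print(call_with_deadline(lambda: 'written', 5.0))   # (True, 'written')
print(call_with_deadline(time.sleep, 0.2, 5))       # (False, None)
```

You could wrap the gcs_file.write() call this way and log or retry when
it doesn't finish in time, instead of hanging forever.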
On Tuesday, September 10, 2013 6:14:58 AM UTC-4, Ben Smithers wrote:
>
> Hi,
>
> I've been having some problems writing data to google cloud storage using
> the python client library (
> https://developers.google.com/appengine/docs/python/googlecloudstorageclient/
> ).
>
> Frequently, the write to cloudstorage will 'hang' indefinitely. The call
> to open a new file is successful, but the write (1MB in size in this test
> case) never returns and the file never appears in the bucket. For example,
> I launch 10 instances of a (backend) module, each of which attempts to
> write a file. Typically, somewhere between 4 and 9 of these will succeed,
> with the others hanging after opening. This is the code I am running:
>
> class StartHandler(webapp2.RequestHandler):
>
>     GCS_BUCKET = "/"
>
>     def debugMessage(self, msg):
>         logging.debug(msg)
>         logservice.flush()
>
>     def get(self):
>         suffix = str(backends.get_instance())
>         filename = self.GCS_BUCKET + "/testwrite" + suffix + ".txt"
>         gcs_file = cloudstorage.open(filename, 'w',
>                                      content_type='text/plain')
>         self.debugMessage("opened file")
>         gcs_file.write("f" * 1024 * 1024 + '\n')  # 1MB payload
>         self.debugMessage("data written")
>         gcs_file.close()
>         self.debugMessage("file closed")
>
>
> I have also attached a tarred example of the full application in case it
> is relevant (to run, you should only need to modify the application name in
> the two .yaml files and the bucket name in storagetest.py). A few
> additional things:
>
> 1.) I wondered if this was a problem with simultaneous writes, so I had
> each instance sleep for 30 seconds * its instance number; I observed the
> same behaviour.
> 2.) I have seen this behaviour on frontend instances, but far far more
> rarely. I modified the above to run in response to a user request - once
> out of 60 times the write hung after opening (a Deadline Exceeded Exception
> was then thrown).
> 3.) I have experimented with the RetryParams (though according to the
> documentation, the defaults should be sufficient) but to no avail. I also
> find it hard to believe this is the issue - I would assume I would be
> getting a TimeoutError.
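
(For what it's worth, tuning those retry settings looks roughly like
this -- a sketch based on the documented RetryParams API; the values and
the `filename` variable are placeholders, not recommendations:)

```python
import cloudstorage

retry_params = cloudstorage.RetryParams(
    initial_delay=0.2,     # seconds before the first retry
    backoff_factor=2,      # delay multiplier between retries
    max_delay=5.0,         # cap on the per-retry delay
    max_retry_period=15,   # give up retrying after this many seconds
    urlfetch_timeout=10)   # timeout for each underlying urlfetch call

# Either install them process-wide...
cloudstorage.set_default_retry_params(retry_params)
# ...or pass them per call:
gcs_file = cloudstorage.open(filename, 'w', content_type='text/plain',
                             retry_params=retry_params)
```
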
>
> Has anyone else observed this behaviour? Does anyone have any suggestions
> for what I am doing wrong? Or a different approach to try?
>
> Very grateful for any help,
> Ben
>
--
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.