Thank you for adding the download feature. I develop on two machines,
Linux and Windows, and syncing from within the SDK will be very
convenient. It makes no difference for me, but I also support the
suggestion to make it optional.
I have noticed a slight problem.
./appcfg.py download_app --app
I don't understand the discussion when Google already indicated this
will be configurable.
--
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To post to this group, send email to google-appeng...@googlegroups.com.
To unsubscribe from this group, send email to google-appengine+unsubscr...@googlegroups.com.
I'm getting a fair amount of these.
A serious problem was encountered with the process that handled this
request, causing it to exit. This is likely to cause a new process to
be used for the next request to your application. If you see this
message frequently, you should contact the App Engine team.
I'd like to automatically reset memcache immediately when I've
uploaded a new version. Is this possible? I currently have a /reset
script which I run manually every time. It's a bit tedious and I'd
like to automate it, especially since there can be a delay of up to a
minute between a deployment succeeding and the updated application being served.
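The flow I'd like to automate, as a sketch (deploy() and flush() are stand-ins, not SDK functions; in practice they'd be an appcfg.py call and an HTTP request to my /reset handler):

```python
# Sketch: flush memcache only once the deployment has definitely
# succeeded. deploy() and flush() are injected placeholders; in
# practice deploy() would shell out to appcfg.py and flush() would
# GET an admin-only /reset URL that calls memcache.flush_all().
def deploy_and_reset(deploy, flush):
    if deploy() != 0:  # non-zero exit status: deployment failed
        raise RuntimeError('deploy failed; leaving memcache untouched')
    return flush()
```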
Sorry, I didn't mention I use Python. I suppose using versioning with
memcache does work, but ideally I'd like Google to add a cron token
like 'once' which does just that. If I'm not mistaken cron is only
parsed and reloaded on deployment.
I've noticed static files like favicon or external stylesheets are
reported with 0 byte size in the log.
It's with status code 200 too. Are they maybe included in the KB of
the main page which references them?
Well, the problem with a script is the delay between deployment and the
updated application being available on the web. I could probably use a
safety margin of one or two minutes. Maybe I should give a bit of
background on why I need this; maybe there is a different solution.
I have a few
Sorry, I didn't mention that I also do a simple string replace where I
insert time(), and also two or three other string replacements based on
user agent, so purely static files won't work.
You're right, the performance difference is negligible, and reading
from file might even be faster if I didn'
I'm having the same problem. Access to the dashboard fails
sporadically.
I tried skipping memcache and opening files directly every time, and
while cpu_ms is equally low, response time is occasionally
dramatically higher, especially when a new instance is loaded. Up to
2000ms, when it's 50ms usually.
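For context, the file-reading variant reduces to something like this sketch (a module-level dict added back as a per-instance cache; on a freshly loaded instance the first request pays the full read cost, which may explain the 2000ms spikes):

```python
# Simplified version of the trade-off: a per-instance cache in a
# module-level dict, with a slow fallback (here: reading a file).
# On a fresh instance the dict is empty, so the first request pays
# the full read cost; later requests are served from memory.
_cache = {}

def get_page(path):
    if path not in _cache:
        with open(path) as f:   # the slow path on a cold instance
            _cache[path] = f.read()
    return _cache[path]
```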
I've been using Linux before to deploy and appcfg saved the login
credentials for a few hours, but have not figured out how to enable it
since I installed it on a new system. Does it require specific Python
modules, or is there a setting that can be set? Thanks.
Thanks, this works too. However I'm sure it was automatic before. I
just entered login and password once when prompted, and was not asked
again on further update requests for at least a few hours, using
python appcfg update /app.
Success. It was indeed the cookies. appcfg had created the file, but it
was empty. I deleted the cookie file, and it worked after appcfg wrote a
new one. Thanks! I don't know, though, why it couldn't just overwrite
the file.
I've only started using SQL, and databases in general, with Google App
Engine, and have been wondering whether I need to be cautious about
possible SQL injection. I suppose this is already taken care of by
design, but I just wanted to clarify. Thanks.
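For what it's worth, the usual advice is: never build query strings by concatenating user input; always bind parameters (GQL supports this, e.g. GqlQuery("SELECT * FROM Book WHERE title = :1", title)). A minimal illustration of the principle, using sqlite3 as a stand-in for any SQL engine:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (name TEXT)')
conn.execute("INSERT INTO users VALUES ('alice')")

# Untrusted input that would break a naively concatenated query:
evil = "x' OR '1'='1"

# Parameter binding treats the whole value as data, never as SQL,
# so the injection attempt simply matches nothing.
rows = conn.execute('SELECT name FROM users WHERE name = ?', (evil,)).fetchall()
```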
And by clarify I mean verify.
Likewise. I've suddenly also been getting 500 errors, which I can't
investigate since no error is in the logs. Quite frustrating.
Thank you Google.
However I've noticed what can only be a bug. According to the
dashboard Task Queue Stored Task Bytes quota is 1,099,511,627,776,000
bytes or about 1100TB. I doubt 100,000,000,000 for Task Queue Stored
Task Count is correct either.
I've got the following configuration and code.

backends:
- name: get
  class: B1
  options: dynamic

taskqueue.add(url='/task/get/', queue_name='update', target='get')
It works and the task runs and does what it's supposed to do, but it's
not in the logs. Why?
Alright, I've figured it out: backends are versions with a separate
dashboard. Might be obvious, but it wasn't to me.
I'd like clarification on dynamic backends, please. According to the
I/O session, dynamic backends get a 15-minute startup penalty, and are
stopped after 1 minute with no requests. Are the 15 minutes inclusive
of requests which might occur during the first 15 minutes after
startup, or in addition? Do
Or is this just against some sort of Pentium = 100 reference value?
I very much agree with this suggestion. I've put the heavy lifting on
backends, and frontends merely serve simple requests now with only
about 10MB memory usage, but would still be charged the same amount as
instances using 128MB. Maybe Google could provide mini-instances,
which have a hard limit o
Thanks for the clarification.
> Q: You seem to be trying to account for RAM in the new model. Will I be
> able to purchase Frontend Instances that use different amounts of memory?
> A: We are only planning on having one size of Frontend Instance.
What's the reason for this? It's completely understandable if Google
wants to ma
I'm getting this on backends occasionally. Is this a catastrophic
shutdown or a regular shutdown?
It seems like cron is technically not different from tasks, just
running on the internal __cron queue (without retries). Are there any
plans to make __cron tweakable and allow retries? Currently I only use
cron to schedule tasks, which is an unnecessary step, and not
completely fail-safe either.
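For reference, the two-step setup I mean looks roughly like this (the URL and description are just my app's, nothing standard):

```yaml
cron:
- description: hourly update
  url: /task/init        # handler that only calls taskqueue.add(...)
  schedule: every 1 hours
```

If __cron supported retry_parameters directly, the /task/init hop could go away.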
http://code.google.com/appengine/docs/python/datastore/async.html
I've read the paragraph at the linked site on this, but the wording is
not clear to me. Do I understand correctly that get_result() is not
required in transactions, because the transaction waits with the
commit until the puts complete?
I'd like to switch mails to Amazon SES and register a domain with
Google Apps, to enable SPF and possibly DKIM on mails. I've read the
docs at Amazon and this seems to require fairly raw DNS record
editing. Does Google Apps allow this, or would I have to register the
domain with a registrar separately?
> The SDK now supports multiple concurrent transactions.
What does this mean exactly?
Does transaction refer to database transactions? Or does "multiple
concurrent transactions" mean asynchronous operations in general? This
is not clear to me. Does it support asynchronous urlfetch now?
And, in a slightly related question, are there plans to raise the
limit of simultaneous asynchronous requests?
It seems like the service is moving in a direction in which devs will
have to adjust many knobs, rather than have Google handle it
automagically in the background.
You have 1,700,000 calls free per day when you enable billing.
http://code.google.com/appengine/docs/quotas.html#Mail
I've got a REST request which exceeds 2048 characters. The server can
handle it without problems, but urlfetch cannot! Is there a way to
override the limit? And I'm wondering why Google set one in the first place. Thanks!
http://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/api/urlfetch_stub.
I'll have to switch to SOAP then :(
Merci! Votre réponse a été enregistrée. ("Thank you! Your response has been recorded.")
I wonder why it's in French.
It's not documented, but it works! :)
I've got three dynamic B1 backends set up, mostly for code
organisation purposes rather than performance reasons, and have
noticed that daily costs are higher than they should be based on the
active minutes (including 15m penalty).
The third hasn't been active today so I'll ignore it in my
calculation.
It seems I misunderstood the documentation on the full-hour charge,
since Google obviously doesn't charge strictly in $0.08 steps. It
changes $0.56 to $0.54, so not much of a difference.
Anyway, I've probably found the problem, or should I say bug. After the
last request on the backend it waits
Thanks for acknowledging. When you've implemented the fix, will it be
announced in this thread or is there an issue on the bug tracker to
follow? I'll probably notice it in the dashboard but still.
I've got a dynamic backend just for sending mail (billing enabled).
Mails are enqueued elsewhere, not on this backend, practically
instantly. The queue is configured like this.
- name: mail
  rate: 360/m
  bucket_size: 40
  retry_parameters:
    task_age_limit: 1d
    min_backoff_seconds: 20
And
And the next run.
2011-08-15 17:02:28.509 /mail/ 200 32ms 23cpu_ms 0kb instance=0
2011-08-15 17:02:25.841 /mail/ 200 28ms 0cpu_ms 0kb instance=0
2011-08-15 17:02:23.184 /mail/ 200 38ms 0cpu_ms 0kb instance=0
2011-08-15 17:02:20.505 /mail/ 200 124ms 0cpu_ms 0kb instance=0
2011-08-15 17:01:46.807 /m
Sorry, I've figured out the delay between the first and second mail.
Both get enqueued from different functions, which can run about 30
seconds apart. I hadn't considered that. The relatively low rate from
the latest run and the very low rate from the first run are an actual
problem, though.
Just for good measure, the next run. I'll leave it at that. Not as
slow, but clearly much slower than it could and should be.
2011-08-15 18:02:18.611 /mail/ 200 141ms 0cpu_ms 0kb instance=0
2011-08-15 18:02:15.834 /mail/ 200 24ms 0cpu_ms 0kb instance=0
2011-08-15 18:02:13.164 /mail/ 200 26ms 0cpu_ms
It's currently mostly for organisational purposes, to have separate
logs and to better notice spikes in the dashboard graph. The backend
can probably handle 360/m because, as you can see from the logs, it
takes much less than 100ms to send a mail out. The problem though is
that it sends out much more slowly than configured.
Actually I just checked the dashboard and noticed that the backend got
52712 failed start requests in about 20 hours.
It seems like the enforced rate situation has improved.
2011-08-17 23:02:10.404 /mail/ 200 21ms 0cpu_ms 0kb instance=0
2011-08-17 23:02:10.270 /mail/ 200 221ms 0cpu_ms 0kb instance=0
2011-08-17 23:02:09.739 /mail/ 200 24ms 0cpu_ms 0kb instance=0
2011-08-17 23:02:09.407 /mail/ 200 26ms 0cpu_ms 0kb
I've had this as well some days ago. It fixed itself the same day.
Any further insight from Googlers on this? It mostly settles at
several mails per second now, which is good enough albeit less than
configured, but there are occasionally still huge delays without any
apparent reason.
2011-08-22 01:03:57.052 /mail/ 200 92ms 0cpu_ms 0kb instance=0
2011-08-22 01:03:
It can't be a coincidence that it sends one mail every 20 seconds,
which happens to match min_backoff_seconds, can it? It's not the first
time I've noticed this either.
2011-08-23 01:03:14.844 /mail/ 200 28ms 0cpu_ms 0kb instance=0
2011-08-23 01:02:54.841 /mail/ 200 31ms 23cpu_ms 0kb instance=0
2011-08-23 01
I haven't, but maybe I should.
I reduced the task rate to 120/m to put less pressure on the task
queue (or however this might work), to at least reliably get this
lower rate, but it thanked me by setting the enforced rate to
0.5/s! :) So that didn't work.
I've been wondering: maybe the backend
I've noticed that some clients check feeds very frequently, so I'd
like to enable Google Cache to get less hits. Is there official
documentation for this? In particular I'm wondering how to ensure that
it doesn't serve old cached files when the file was updated. It's only
updated several times daily.
I've got two MS apps which are exactly the same, except for maybe ten
lines in the code (locale differences). This isn't to stay within
quota limits, but rather because it was much easier to write like
that. I've been just using a sort of template approach for deployment
which worked very well.
Ho
I've set up the domain with Google Apps. I'll only have to figure out
some expiry logic then. I haven't used this before. Basically the feed
can only be updated between :00 and :02 at every full hour.
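The expiry logic could then be a small helper like this (a sketch; 120 seconds of slack to cover the :00 to :02 update window, with the result going into a Cache-Control: public, max-age=... header):

```python
# Compute a max-age (in seconds) lasting until just past the next
# full hour, so a cache never serves a copy across an update boundary.
# The 120s slack covers the :00 to :02 update window.
def seconds_until_next_update(minute, second, slack=120):
    remaining = 3600 - (minute * 60 + second)  # seconds to the next :00
    return remaining + slack
```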
It seems to work. I'm getting HTTP 204 status reports for the URL.
Well, it seems like maybe it isn't working how it should. I'm
noticing two different responses, which both show up as 204 in the
logs and are both cached somewhere, because the request handler
doesn't run.
Status Code:304 OK
Age:419
Date:Mon, 29 Aug 2011 19:22:09 GMT
Expires:Mon, 29 Aug 2011 20
http://code.google.com/appengine/docs/python/appidentity/overview.html
I've read the example and it's fairly simple, but I've got two
questions.
1. Which scope should I use?
2. Can I just use login: in app.yaml (of the other app) with the
account returned by get_service_account_name()?
I haven't been able to figure out what the reason is. It seems to have
improved slightly to about 3/s but it still had 20 seconds delays
occasionally for no apparent reason. I've since moved mail sending to
a named version (rather than backend), and it works great. It can
easily send 6/s, with one
It's an about 50% increase for me, because of datastore writes.
This app is still relatively low traffic, so instance hours aren't
billed here yet. However, I can already tell they will drive up costs
significantly once traffic increases.
When are instances supposed to shut down after being idle? I've got
two idle instances (on a separate version) which didn't get a request
for almost two hours (1:47:09 ago).
Instances
QPS    Latency  Requests  Errors  Age      Memory  Availability
0.000  0.0 ms   14        0       1:53:2
I've figured it out: it doesn't work like that, but IMO it should.
That'd be quite nice.
Do I assume correctly that this requires an SDK update? Just wondering
because it doesn't seem to have made progress in two weeks.
I've put mail sending on a separate named version, to have separate
logs and graphs. It works very well, thanks to the task target
parameter. However, mails are sent in huge batches, and spin up
instances which sometimes stay idle for hours. It might help to adjust
idle instances and latency, but t
> It means you won't be charged for *idle* instances over the specified
> maximum, we'll update that to be more clear.
This should be put in bold letters.
I've got 4MB stored in entities including metadata, and a bit less
than 4MB in memcache. Yet the quota tab claims I'm using 120MB
currently. How does this work?
I am getting a bit worried actually, because it increased by another
20MB even though only a few KB of data were added.
These are the Models I have, with obviously only a single index.
class HTML(db.Model):
    html = db.TextProperty()

class Book(db.Model):
    added = db.DateTimeProperty(auto_n
I should add that all HTML entities are updated every hour as well.
I don't delete entities; I overwrite them. Here is a code snippet.

class Task(webapp.RequestHandler):
    def post(self):
        ...
        for i in range(7):
            for color in colors:
                ...  # html
                HTML(key_name=str(i)+color, html=html).put()
                memca
I've noticed a slight mistake in the code, introduced when I copied
and modified it. Replace plain with color in the following line.

html = HTML.get_by_key_name(str(i)+plain)
Well, Total Stored Data just reset itself to a few MB. I haven't even
deployed the new code yet, so it's not that. It seems overwriting an
entity (which I suppose also deletes it in an intermediate step) only
marks it as deleted, as mentioned in a reply, and defers the actual
delete operation. And
I currently use this in a task.

for recipient in recipients:
    msg.to = recipient
    msg.send()

This works, but there is a problem. When send() fails, so does the
task, and all mails sent so far are sent again in the task retry. What
I'd like is something like mail.batch(recipients), which guarantees t
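A workaround sketch until something like mail.batch exists (which is my wished-for name, not a real API): remember which recipients already succeeded, so a task retry skips them. The sent set is passed in to keep the sketch self-contained; on App Engine it would live in memcache or the datastore, keyed by the task name.

```python
# Send to each recipient at most once across task retries: skip
# anyone already recorded as sent, and record each success before
# moving on. 'send' and 'sent' are injected placeholders; a real
# version would back 'sent' with memcache or the datastore.
def send_remaining(recipients, send, sent):
    for recipient in recipients:
        if recipient in sent:
            continue
        send(recipient)       # may raise and fail the task...
        sent.add(recipient)   # ...but completed sends are remembered
    return sent
```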
I found out the answer to my second question.
OverQuotaError: The API call mail.Send() required more quota than is
available.
I've figured out a solution to not resend mails (as posted partially
in appengine-python) but have been wondering: is there any reason to
not put recipients into BCC instead?
You could BCC the mail to a dedicated address which verifies it. If
it's not verified within a set time (which could be implemented with a
task countdown), it is sent again. Of course this only verifies that
the mail was sent out, not that it was also received by the recipient.
In the example below, why is this redundancy necessary? I don't
understand why it needs to be declared separately.
handlers:
- url: /(.*\.(?:css|ico|png|txt))
  static_files: static/\1
  upload: static/(.*\.(?:css|ico|png|txt))
> static_files can contain substitution parameters from the regular expression
> in 'url', and there's no way to tell which strings could be inserted with
> those substitution parameters.
Can you post an example?
I've noticed mails are sent with the following encoding.

Content-Type: text/plain; charset=ISO-8859-1; format=flowed; delsp=yes
Content-Transfer-Encoding: base64

To be honest I'm not very familiar with charsets, so ISO-8859-1 may be
sufficient, but I've been wondering whether the charset can be set?
There is a bug report for this already, but it's wrongly labeled as
"Component-Urlfetch" and doesn't seem to get attention.
GET /feed HTTP/1.1" 200 11203 - "Apple-PubSub/65.20,gzip(gfe)"
GET /feed HTTP/1.1" 200 2849 - "Mozilla/5.0 (Windows; U; Windows NT
6.1; en-US; rv:1.9.2.13) Gecko/20101203 Fir
I've got an hourly task which makes updates and then resets memcache
so that the new updated entries are served. I just had it happen that
the task returned 200, and all seemed fine, but memcache was not
reset. Is this a known bug? It worked hundreds of times before.
I just want to add that I manually ran the same task minutes later,
and memcache reset fine, so it's not the code.
Sorry for another message, but I just read that this function returns
a bool. I haven't checked it so far, because I expected the function
to trigger an exception when it fails. So is this expected behaviour
then?
> Correct - you need to check the return code. This is designed for
> compatibility with the memcached interface, which uses return codes rather
> than exceptions.
Thank you for clarifying.
I've got an hourly cron, which inits a task. This works well, but I
just noticed for the second time this week what is very likely a bug.
0.1.0.1 - - [17/Feb/2011:23:00:00 -0800] "GET /task/init HTTP/1.1" 200
ms=22
0.1.0.2 - - [17/Feb/2011:23:01:43 -0800] "POST /task/get HTTP/1.1" 200
ms=103318
0.
I've got one further question: what does the function return during
maintenance? Also, is there any scenario possible where the database
is in maintenance but memcache isn't? I already check for database
maintenance earlier during the task.
I meant the other way around: scenario where memcache is in
maintenance but database isn't.
> That said they may only put one in maintenance, so the only way to really
> check is to use the Capabilities API.
> http://code.google.com/intl/en/appengine/docs/python/capabilities/ove...
Thanks, I didn't know about the API. I used
apiproxy_errors.CapabilityDisabledError to check for database
ma
Well, the return code seems to be flaky. I've got the following code.

if CapabilitySet('memcache').is_enabled():
    if not memcache.flush_all():
        raise Error('RESET')

And yet I've noticed once that memcache was not reset, but no
exception was raised.
I've got generous free quotas in my dashboard :D

Datastore API Calls         0%  7,405 of 9,223,372,036,854,775,808  Okay
Datastore Queries           0%  132 of 9,223,372,036,854,775,808    Okay
Data Sent to Datastore API  0%  0.07 of 8,589,934,592.00
I've noticed it can take up to 48h until data is deleted (or
recalculated).
Thank you, Google! Another great release.
I actually wanted to ask about directly addressing instances today.
I've got one task which takes few minutes and is resource intensive,
and when normal web requests are assigned to the same instance it can
result in long response times. Do I understand co
I've tried backends, works great, but I've been wondering why tasks
running on backends are not logged automatically.
> Anyway, in the FAQ, I'd like a transparent, honest answer about why the
> switch from CPU-hours to instance-hours (not a vague 'based on the value of
> the service', 'based on feedback'), and a comprehensive outline of the
> ramifications.
"In its three short year history, Google App Engine has
I'm quite sure this has been fixed as of yesterday.
I've noticed that log messages are only logged with the request if
logging() is used no more than twice. Otherwise I have to select
'Info' in the filter. I've already set logservice.AUTOFLUSH_ENABLED
to False.
This is a request with 4 log messages and no filter in the logs
(dashboard).
2011-09-1
Slight mistake: the first request only has 3 messages of course, not
that it makes a difference.
I understand this, but why the difference? I'd much prefer not to
have to switch the filter when there are more than two messages.
http://code.google.com/appengine/docs/python/backends/overview.html#Billing_Quotas_and_Limits
- Task Queue tasks have a 24-hour deadline when sent to a backend
http://code.google.com/p/googleappengine/issues/detail?id=5870
When will the docs for async memcache be added? I've got one
particular question.
In transactions, when you use db.put_async(), the transaction
guarantees the put even if get_result() is not called. As I understand
it, the transaction automatically calls get_result() for the put.
How is this handled for async memcache calls?