On Wednesday, August 1, 2012 12:12:05 AM UTC+2, Jeff Schnitzer wrote:
>
>
> At this point all I can think of is to use something external to GAE 
> to queue the submissions.  A simple custom in-memory server using 
> technology that handles concurrency well (eg Java) would work.  For 
> something more off-the-shelf, a Redis instance fronted by a couple web 
> frontends to handle the submit() and reap() requests.
>

Unless you move the whole thing out of GAE, I suspect 1000 TPS could well 
suffer from variable performance whether you're using puts, tasks, or URL 
fetch.  Before building out a second, external setup and spending time 
integrating it, I'd look at reducing the number of transactions within GAE, 
again by batching them.

For example:
Each instance receives values, buffers them in memory (perhaps with 
memcache as a backup), and at the end of each second (or two, or whenever 
the maximum entity size is reached) submits the whole buffer as one 
blob-like entity, via whatever medium ends up working best.  The risk is 
losing an instance's buffered values if that instance fails, but I'd get it 
working now and improve the resilience later, because that devil is smaller 
than the one Richard is facing now.
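As a sketch of the buffering idea (plain Python; the `sink` callable is a 
hypothetical stand-in for whatever actually persists the blob, e.g. a 
single datastore put — the names and parameters here are mine, not any 
GAE API):

```python
import threading
import time


class BatchBuffer:
    """Buffers incoming values in memory and hands them to `sink`
    as one batch, either every `flush_interval` seconds or once
    `max_batch` values have accumulated — whichever comes first."""

    def __init__(self, sink, flush_interval=1.0, max_batch=500):
        self.sink = sink                      # e.g. one blob-like put
        self.flush_interval = flush_interval
        self.max_batch = max_batch
        self._lock = threading.Lock()
        self._values = []
        self._last_flush = time.monotonic()

    def submit(self, value):
        with self._lock:
            self._values.append(value)
            due = (len(self._values) >= self.max_batch or
                   time.monotonic() - self._last_flush >= self.flush_interval)
            if due:
                self._flush_locked()

    def flush(self):
        """Force out whatever is buffered (e.g. on instance shutdown)."""
        with self._lock:
            self._flush_locked()

    def _flush_locked(self):
        # Caller must hold self._lock.
        if self._values:
            self.sink(self._values)  # one write instead of len(values)
            self._values = []
        self._last_flush = time.monotonic()
```

The lock is there because GAE instances can serve concurrent requests; 
everything between two flushes costs only memory, and only the flush 
itself touches the datastore.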

This way, we're back to using e.g. put(), but hopefully changing 1000/sec 
into 100, 50 or 10/sec.
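Back-of-the-envelope, with assumed figures: after batching, the write rate 
depends only on how many instances are flushing and how often, not on the 
incoming value rate.

```python
def batched_write_rate(instances, flushes_per_instance_per_sec):
    # One blob-like put per flush per instance, regardless of how
    # many values arrived in that window (assumed model, not a
    # measured GAE figure).
    return instances * flushes_per_instance_per_sec

# e.g. 10 instances each flushing once a second:
print(batched_write_rate(10, 1))  # 10 puts/sec, down from 1000
```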

What sucks about this idea?

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To view this discussion on the web visit 
https://groups.google.com/d/msg/google-appengine/-/JzFdo5y7RvkJ.