If Richard's metrics are right (he has built a very precise timing
system that accounts for clock skew, so I suspect they are), there's
sometimes a 4s+ delay between submitting a task to a pull queue and
when it becomes available for lease.  That makes pull queues a poor
fit when data collection and reaping have to be timed precisely
within a 10s window.

Honestly I'm struggling to find an in-appengine solution to this
problem.  Let's say at T+0 clients submit 1000 data points.  At T+10s,
the data must be collated to produce a leaderboard, and shortly
afterwards clients fetch the leaderboard.

 * Saving the data points to the datastore and querying them back
suffers from eventual consistency: a query run at T+10s can miss
recently written entities.  It works most of the time, but not
always.

 * Putting items in a pull queue and then fetching them out in
batches would be the most logical solution, but the queue delay
causes problems, and there doesn't seem to be any documented upper
bound on queue latency.

 * Using in-memory state in a Backend would be the next logical
solution, but backends can't handle the throughput.
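
For reference, the reap-and-collate step for the pull-queue approach
would look roughly like this.  This is only a sketch: on GAE the queue
would be taskqueue.Queue('scores') with lease_tasks()/delete_tasks()
calls (the queue name and batch size are made up), and I've modeled the
queue with a plain in-memory deque so the collation logic runs anywhere:

```python
from collections import deque

class FakePullQueue(object):
    """In-memory stand-in for a GAE pull queue, so this sketch runs
    outside GAE.  On App Engine this would be taskqueue.Queue('scores'),
    and the reaper would call q.lease_tasks(lease_seconds=30,
    max_tasks=1000) followed by q.delete_tasks(tasks) on success."""
    def __init__(self):
        self._tasks = deque()

    def add(self, payload):
        self._tasks.append(payload)

    def lease_tasks(self, max_tasks):
        n = min(max_tasks, len(self._tasks))
        return [self._tasks.popleft() for _ in range(n)]

def reap_and_collate(queue, batch=1000):
    """Drain the queue in batches and build a leaderboard, best score
    first, keeping each player's highest submission."""
    best = {}
    while True:
        tasks = queue.lease_tasks(batch)
        if not tasks:
            break
        for player, score in tasks:
            if score > best.get(player, float('-inf')):
                best[player] = score
    return sorted(best.items(), key=lambda kv: -kv[1])

q = FakePullQueue()
for entry in [('alice', 10), ('bob', 7), ('alice', 12)]:
    q.add(entry)
print(reap_and_collate(q))  # [('alice', 12), ('bob', 7)]
```

The logic itself is trivial; the whole problem is that the lease call at
T+10s may not yet see tasks submitted a few seconds earlier.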

At this point I think the GAE toolbox is empty.  It might be possible
to hack something together using memcache, but it would be fragile
and would lose data whenever memcache evicts keys or resets.
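
To be concrete about what that hack would look like: something like
sharded list keys with compare-and-set appends.  A rough sketch only,
with made-up key names and shard count, and memcache's gets()/cas()
modeled by a dict-backed stub so it runs anywhere; the stub's comments
flag exactly where the real thing can silently drop data:

```python
import random

NUM_SHARDS = 8  # spread writes over shards to reduce cas contention

class FakeMemcache(object):
    """Dict-backed stand-in for GAE's memcache.Client (gets/cas).  Real
    memcache can evict any key at any time, which is why this approach
    is fragile: a reset drops every pending submission."""
    def __init__(self):
        self._data = {}

    def gets(self, key):
        return self._data.get(key)

    def cas(self, key, value):
        # The real cas() returns False if another writer raced us;
        # callers must retry.  The stub never races, so always succeed.
        self._data[key] = value
        return True

    def get(self, key):
        return self._data.get(key)

def submit(client, entry):
    key = 'scores:%d' % random.randrange(NUM_SHARDS)
    while True:  # cas retry loop
        current = client.gets(key) or []
        if client.cas(key, current + [entry]):
            return

def reap(client):
    entries = []
    for i in range(NUM_SHARDS):
        entries.extend(client.get('scores:%d' % i) or [])
    return entries
```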

So all I can think of is to use something external to GAE to queue
the submissions.  A simple custom in-memory server, written in
something that handles concurrency well (e.g. Java), would work.  For
something more off-the-shelf: a Redis instance fronted by a couple of
web frontends to handle the submit() and reap() requests.  At least
we *know* that Redis can handle thousands of submissions per second.
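
The Redis version of submit()/reap() would be pleasantly small.  A
sketch assuming redis-py: RPUSH is atomic under concurrent writers,
and reap() grabs and clears a round's list in one MULTI/EXEC pipeline.
A minimal in-memory stub stands in for the server here so the sketch
is runnable; against a real server you'd pass redis.StrictRedis()
instead:

```python
class FakeRedis(object):
    """Minimal stand-in mirroring the redis-py calls used below (rpush,
    plus a pipeline with lrange/delete/execute)."""
    def __init__(self):
        self._lists = {}

    def rpush(self, key, *values):
        self._lists.setdefault(key, []).extend(values)
        return len(self._lists[key])

    def pipeline(self):
        return _FakePipeline(self)

class _FakePipeline(object):
    def __init__(self, r):
        self._r, self._ops = r, []

    def lrange(self, key, start, end):
        self._ops.append(('lrange', key, start, end))
        return self

    def delete(self, key):
        self._ops.append(('delete', key))
        return self

    def execute(self):
        out = []
        for op in self._ops:
            if op[0] == 'lrange':
                _, key, start, end = op
                lst = self._r._lists.get(key, [])
                out.append(lst[start:] if end == -1 else lst[start:end + 1])
            else:
                out.append(1 if self._r._lists.pop(op[1], None) is not None else 0)
        self._ops = []
        return out

def submit(r, round_key, entry):
    r.rpush(round_key, entry)  # atomic; safe under concurrent writers

def reap(r, round_key):
    pipe = r.pipeline()        # MULTI/EXEC: read and clear in one step
    pipe.lrange(round_key, 0, -1)
    pipe.delete(round_key)
    entries, _ = pipe.execute()
    return entries
```

Keying the list per round (e.g. 'round:42') also sidesteps the
leftover-entries problem from the previous round that Richard
mentioned: each round reaps and deletes its own key.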

Jeff

On Tue, Jul 31, 2012 at 1:54 PM, Richard Watson
<[email protected]> wrote:
> How many queues are you using? Could you add more?
> Can you batch data into fewer tasks?  If they're arriving at such a high
> rate, batching and adding fewer tasks might help.
>
>
> On Tuesday, July 31, 2012 10:10:12 PM UTC+2, Richard wrote:
>>
>> Yeah, but my worry is that if the queue architecture cannot handle 160
>> entries... how will it handle > 1000 ?  Remember these entries hang around
>> for the next game round.  So at that point, each client is also going to
>> have to remove duplicates from other people that were left over from the
>> previous round (hoping this makes sense!).  Or else I purge the queue in
>> between ... after attempting to patch the leaderboards a second time with
>> the extra entries.
>>
>> It just seems like a hack on top of a hack and extremely ugly....
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To view this discussion on the web visit
> https://groups.google.com/d/msg/google-appengine/-/SFiRDIKnOAgJ.
>
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
