Hi,
If reading 1000+ user records (say, keys) in ONE task execution is a
problem, then it doesn't matter whether you use Memcache or the
datastore; your solution is not going to scale to updating 1000+ user
records (for last poll time) in ONE task execution either.

What I understood from your problem is that you want to keep every
user's last poll time updated against their user data. As a solution,
you are running a kind of timer thread in the form of Task Queues that
updates the last poll time of all users who accessed/polled in the
last 30/n seconds. Below is a slightly different approach, based on my
understanding of your problem.

Solution 1) Using Memcache

> You don't need to create a 30-poll cache or an n-poll cache. Just create one 
> Set<Key> object, put it in Memcache, and have your polling request handler 
> keep adding each user's id/key to this set. Using a Set/HashSet makes sure 
> that you don't need to worry about merging or duplicates.

> After 30/n seconds, when your task starts executing, it will take all the 
> data (the Set<Key>) from the cache and then remove everything from the set 
> in Memcache, so that your polling request handler finds it empty and starts 
> adding new user ids/keys to it.

> Now it may be possible that you get a set of 10000 or more from the cache, 
> and the task execution may not be able to update all of them in 30 seconds, 
> so you may want to create a sub-task that takes the leftover user keys and 
> does the same thing again. Basically, you put the leftover ids back in 
> Memcache under a different id and pass that Memcache id to the sub-task, 
> which reads the ids from there. Or, to make it more efficient: if you 
> already know from experience that you can update about 500 records in one 
> execution, divide the leftover user ids into sets of 500, create as many 
> tasks as you need, and save each set of 500 in Memcache under a different 
> id like LEFT_OVER_SET_1, LEFT_OVER_SET_2, etc. Each task reads its data 
> from Memcache using the id it was given, and if anything is still left 
> over, it invokes a sub-task again (dividing by 500; this time the total 
> will already be less than 500).
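
The splitting step could be sketched as below. The batch size of 500 and the LEFT_OVER_SET_n key names come from the description above; the map standing in for Memcache and the class/method names are just for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: split leftover user ids into batches of 500, each stored under a
// generated id (LEFT_OVER_SET_1, LEFT_OVER_SET_2, ...). The returned map
// stands in for the Memcache entries; each key would be handed to a sub-task.
public class LeftoverSplitter {
    static final int BATCH_SIZE = 500; // "500 records in one execution"

    static Map<String, List<String>> split(List<String> leftoverIds) {
        Map<String, List<String>> batches = new LinkedHashMap<>();
        int setNumber = 1;
        for (int i = 0; i < leftoverIds.size(); i += BATCH_SIZE) {
            int end = Math.min(i + BATCH_SIZE, leftoverIds.size());
            // In the real app: memcache.put("LEFT_OVER_SET_" + n, batch)
            // and enqueue a task carrying that cache id.
            batches.put("LEFT_OVER_SET_" + setNumber++,
                        new ArrayList<>(leftoverIds.subList(i, end)));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 1200; i++) ids.add("user-" + i);
        Map<String, List<String>> batches = split(ids);
        System.out.println(batches.size());                        // 3 batches
        System.out.println(batches.get("LEFT_OVER_SET_3").size()); // 200 ids left in the last one
    }
}
```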

But I will still say that Memcache doesn't guarantee the data will
remain available over a span of time, so instead of Memcache you can
use the datastore in the above solution.

Ravi.




-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
