Yes, I understand that a few entries might be lost. You can fix that in this
solution in several ways; one of them is to keep two sets. (I thought of
explaining it before, but the mail was getting long :) )

OK, let me explain in detail.

For this to work, both caches need separate ids and an expiry time of, say,
30 seconds or more:

Key                   Value            Description
MY_USER_CACHE_FIRST   Set<Key> users   for poll requests between seconds 00 and 30
MY_USER_CACHE_LAST    Set<Key> users   for poll requests between seconds 31 and 59

Each poll handler checks whether the current second falls between 00 and 30
or between 31 and 59.

So if a poll comes at 03:12:09 (XX:YY:09) AM, it will go to the cache
MY_USER_CACHE_FIRST, and if a request comes at 03:12:35 (XX:YY:35) AM, it
will go to the cache MY_USER_CACHE_LAST.
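A minimal sketch of that bucket selection in plain Java (no App Engine APIs;
the cache ids are the ones from the table above, the class and method names
are made up for illustration):

```java
import java.time.LocalTime;

public class PollBucket {
    // Seconds 00-30 of the current minute go to the FIRST set,
    // seconds 31-59 go to the LAST set.
    static String cacheKeyFor(LocalTime now) {
        return now.getSecond() <= 30 ? "MY_USER_CACHE_FIRST" : "MY_USER_CACHE_LAST";
    }

    public static void main(String[] args) {
        System.out.println(cacheKeyFor(LocalTime.of(3, 12, 9)));  // MY_USER_CACHE_FIRST
        System.out.println(cacheKeyFor(LocalTime.of(3, 12, 35))); // MY_USER_CACHE_LAST
    }
}
```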

And run your task at every XX:YY:35 and XX:YY:05. So the 11:29:35 (XX:YY:35)
task will receive all users who polled between 11:29:00 (XX:YY:00) and
11:29:30 (XX:YY:30), and the 11:30:05 (XX:YY:05) task will receive all users
who polled between 11:29:31 (XX:(YY-1):31) and 11:29:59 (XX:(YY-1):59). At
the top of the hour, even XX becomes XX-1.

The 11:29:35 task will always read the cache MY_USER_CACHE_FIRST and work on
it as I explained before; in case you receive millions of records, divide the
work into small tasks and put each chunk separately into a small memcache
object, etc. The 11:30:05 task will always work on MY_USER_CACHE_LAST.
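A rough sketch of what each task does with its bucket, using a plain
ConcurrentHashMap as a stand-in for memcache (the real App Engine memcache
calls, and a compare-and-set to avoid racing with poll handlers, are left
out; all names here are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DrainTask {
    // Stand-in for memcache: cache id -> set of user keys.
    static final Map<String, Set<String>> cache = new ConcurrentHashMap<>();

    // Take the whole set for this bucket and replace it with a fresh empty
    // one, so poll handlers start filling an empty set while the task works
    // on the data it took.
    static Set<String> drain(String cacheKey) {
        Set<String> taken = cache.put(cacheKey, ConcurrentHashMap.newKeySet());
        return taken == null ? Collections.emptySet() : taken;
    }

    // Split the drained users into chunks of `size` (e.g. 500), one chunk
    // per sub-task, to be saved under ids like LEFT_OVER_SET_1, _2, ...
    static List<List<String>> chunk(Set<String> users, int size) {
        List<String> all = new ArrayList<>(users);
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < all.size(); i += size) {
            chunks.add(all.subList(i, Math.min(i + size, all.size())));
        }
        return chunks;
    }
}
```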


I kept a 5-second gap between the last entry going into the cache and the
task starting; you can choose this difference as you want, maybe just one
second.


Ravi.







On Thu, Jun 24, 2010 at 7:04 PM, Martin Webb <[email protected]> wrote:

> @ping2ravi
> this won't do what was required: every time you set mcache you will
> renew the 30-second lifespan. Further, if mcache does clear on the 30-sec
> count, a poll 1 sec ago would be lost - that was the whole point of using 30
> 1-sec polls, as they all clear on deadline and don't get repeatedly
> refreshed?
>
>
>
> Regards
>
>
>
>
>
> *Martin Webb*
>
>
>
>
>
>
>
>
>
> ------------------------------
> *From:* ping2ravi <[email protected]>
> *To:* Google App Engine <[email protected]>
> *Sent:* Thu, 24 June, 2010 13:41:50
> *Subject:* [google-appengine] Re: What is a pattern for keeping track of
> current users in google app engine?
>
> Hi,
> If reading 1000+ user records (say, keys) in ONE task execution is a
> problem, then it doesn't matter whether you use memcache or the datastore:
> your solution is not going to scale to updating 1000+ user records (for
> last poll time) in ONE task execution.
>
> What I understood of your problem is that you want to keep every user's
> last poll time updated against their user data. And as a solution you are
> running a kind of timer thread, in the form of task queues, that updates
> the last poll time of all users who accessed/polled in the last 30/n
> seconds. I am suggesting a slightly different approach based on my
> understanding of your problem.
>
> Using memcache, solution 1:
>
> > You don't need to create a 30-poll cache or n-poll cache. Just create one
> Set<Key> object, put it in memcache, and your polling request handler will
> keep adding users' ids/keys to this cache. Using Set/HashSet makes sure
> that you don't need to worry about merging or duplicates.
>
> > When your task starts executing after 30/n seconds, it will take all the
> data (the Set<Key>) from the cache and then remove everything from the set
> in memcache, so that your polling request handlers will find it empty and
> start adding new user ids/keys to it.
>
> > Now it may be possible that you get a set of 10000 or more from the cache
> and the task execution may not be able to update them all in 30 seconds, so
> you may want to create another sub-task that takes the left-over user keys
> and does the same thing again. Basically, you put the left-over ids back
> into memcache under a different id and pass that memcache id to the
> sub-task, which can read them from there. Or, to make it more efficient, if
> you already know by experience that you can update only 500 records in one
> execution, divide the left-over user ids into sets of 500 and create as
> many tasks as you need, saving each set of 500 in memcache under a
> different id like LEFT_OVER_SET_1, LEFT_OVER_SET_2, etc. Each task reads
> its data from memcache using the id it was given, and if something is left
> over it invokes a sub-task again (dividing by 500, though by then the total
> will already be less than 500).
>
> But I still say memcache doesn't guarantee that data will remain available
> over a span of time, so instead of memcache you can use the datastore in
> the solution above.
>
> Ravi.
>
>
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Google App Engine" group.
> To post to this group, send email to [email protected].
> To unsubscribe from this group, send email to google-appengine+
> [email protected].
> For more options, visit this group at
> http://groups.google.com/group/google-appengine?hl=en.
>
>
>
