The key for the poll would be made as follows:

    second = time.localtime().tm_sec   # seconds part of the current time, 0-59
    if second > 29:
        second = second - 30
    # this gives us 0-29, then 0-29 again, within each minute

then make the memcache key as

    key = 'poll' + str(second)
I don't know how many users may poll in one second, but I can't see
merging the lists stored in each poll bucket being too much of a problem.
If 30 buckets are too many, use 15 (one for each 2-second interval) or
10 (one for each 3-second interval).
Furthermore, I don't know what info you need on the user. If all you want
is their identifier, that (or their email) can be stored in the list;
this would save having to fetch the user's data from the model using the key.
Martin
> You wouldn't - the idea with memcache would be to make a hash or a
> key for the current second's element, i.e. 1-30,
> store each polling user in the correct element,
> and then read the users back out of memcache
> by looping over the 30 memcache entries
>
> I did give this some thought last night when I saw your post, but I ran
> out of time.
> The idea would be:
>
> let's say we use poll1 to poll30 as keys
> poll1 - happened 1 sec ago
> poll29 - happened 29 secs ago
> poll30 - will be removed when 31 secs elapse, as we set memcache to
> expire polls after 30 seconds
>
> then when someone polls, we get the time and convert it into a number 1-30,
> these being the second elements in a minute - this needs some thought!!!!
> maybe seconds divided by 2 - something like that
Surely not, but seconds % 30 should work. However, this way you store
users who polled multiple times during the last 30 seconds in multiple
buckets. When computing the user list, you need to read all 30 buckets
and merge them.
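Sketched in Python, the seconds % 30 scheme with the 30-bucket merge could look like the following (the memcache client is simulated here with a plain dict, and `bucket_key`, `record_poll`, and `active_users` are made-up names; on App Engine you'd use the memcache API with a 30-second expiry, ideally with a compare-and-set loop so concurrent polls don't lose updates):

```python
def bucket_key(ts):
    # One rotating bucket per second of a 30-second window, via seconds % 30.
    return 'poll%d' % (int(ts) % 30)

def record_poll(cache, user_id, ts):
    # Append the user to the current second's bucket.
    # With real memcache this would be a get, append, set with expiry.
    key = bucket_key(ts)
    users = cache.get(key, [])
    users.append(user_id)
    cache[key] = users

def active_users(cache):
    # Merge all 30 buckets and deduplicate, since a user who polled
    # several times in the window appears in several buckets.
    seen = set()
    for i in range(30):
        seen.update(cache.get('poll%d' % i, []))
    return seen
```

The dedup in `active_users` is what pays for the multiple-bucket storage mentioned above.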
Maybe saving a list of (user, time) pairs in a single bucket would be
better, and a dictionary mapping users to times would be better still.
On each poll you would set the value for the current user, and from time
to time evict entries that are too old, so the data doesn't grow too much.
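A sketch of the dictionary variant, assuming the whole mapping lives in a single memcache value that is read, updated, and written back on each poll (`touch_user` and `max_age` are invented names for illustration):

```python
def touch_user(bucket, user_id, now, max_age=30):
    # bucket maps user id -> timestamp of their last poll; the whole
    # dict would be stored as one memcache value.
    bucket[user_id] = now
    # Evict entries older than max_age so the value doesn't grow unbounded.
    for uid, ts in list(bucket.items()):
        if now - ts > max_age:
            del bucket[uid]
    return bucket
```

Evicting on every write keeps the value small; evicting only "from time to time", as suggested above, would just move this loop behind a cheap counter or probability check.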
This way you could use multiple buckets too - say four of them, indexed
by minutes % 4 with a timeout of about one minute. This limits the data
growth, and when computing the user list you only need to look at the
last one or two buckets.
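The four-bucket variant might look like this; `minute_key` and `users_last_minute` are made-up names, and the cache is again simulated as a dict of per-bucket dictionaries:

```python
def minute_key(ts):
    # Four rotating buckets indexed by minutes % 4, as suggested.
    return 'poll-min%d' % ((int(ts) // 60) % 4)

def users_last_minute(cache, now, max_age=60):
    # Look only at the current and previous minute's buckets, keeping
    # entries no older than max_age seconds.
    users = set()
    for minutes_back in (0, 1):
        key = 'poll-min%d' % (((int(now) // 60) - minutes_back) % 4)
        for uid, ts in cache.get(key, {}).items():
            if now - ts <= max_age:
                users.add(uid)
    return users
```

Only two of the four buckets are ever read; the other two exist so a bucket is never reused while entries in it could still be within the one-minute window.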
If there were thousands of users active in the last minute, the data
could be so large that the overhead of reading and storing it might be
high (I have no idea how efficient memcache loads and stores are). In
such a case, splitting the users into groups (determined e.g. by
something like username.hashCode() % 8) could help.
I'm not sure if this is the way to go, but it sounds quite simple.
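Translating the hashCode() % 8 grouping above into Python could be sketched as follows (`shard_key` is a made-up name, and `zlib.crc32` stands in for Java's `hashCode`, since Python's built-in `hash()` is not stable across processes):

```python
import zlib

def shard_key(user_id, shards=8):
    # Split the active-user data across several smaller memcache values,
    # chosen by a stable hash of the user name modulo the shard count.
    return 'poll-shard%d' % (zlib.crc32(user_id.encode('utf-8')) % shards)
```

Reading the user list then means fetching all eight shard values (a single batched get in memcache), each of which stays small.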
> we then add the user_key to the poll - maybe a dictionary?
> then every second, when your task runs,
> you load poll1....poll30 in a loop, grab all the keys in the dicts, and
> make a list;
> then if you need to load the models for the users, you do a db.get(list)
>
> I think that's starting to sound like it might work - I'm sure some
> bright spark can add some more detail.
> Hope that helps
--
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/google-appengine?hl=en.