Yes, you do. Welcome to the wonderful awful world of creating applications that will scale.
Go learn about sharded counters next. :)

On Oct 21, 2011, at 4:03 PM, Philip wrote:

> So that's what seems odd to me. The HR datastore is best used with
> entity groups and yet there is such a huge limit on its throughput.
>
> Since we can't rely on memcached values staying around (since they
> might be purged at any time), there seems to be no decent workaround
> for having immediately consistent reads (in case memcache is purged)
> without being limited to 1 write per second.
>
> Do I have that write (pun intended)?
>
> On Oct 21, 12:32 pm, Steve Sherrie <[email protected]> wrote:
>> This just refers to the entity group write limit that is the same in
>> both MS and HR datastores.
>>
>> On 11-10-21 03:30 PM, Philip wrote:
>>> I am concerned about the statement:
>>>
>>> "This allows queries on a single guestbook to be strongly consistent,
>>> but also limits changes to the guestbook to *1 write per second*
>>> (the supported limit for entity groups)."
>>> http://code.google.com/appengine/docs/python/datastore/hr/overview.html
>>>
>>> Is it true that writes are limited to 1 per second when using the high
>>> replication datastore or is this an old limitation?

--
You received this message because you are subscribed to the Google Groups
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to
[email protected].
For more options, visit this group at
http://groups.google.com/group/google-appengine?hl=en.
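The sharded-counter pattern pointed to above can be sketched in plain Python. This is an illustrative model only, not App Engine code: in the real pattern each shard is its own datastore entity in its own entity group, so the ~1 write/sec entity-group limit applies per shard instead of to the counter as a whole, and N shards give you roughly N writes per second. The names `NUM_SHARDS`, `increment`, and `get_count` are my own for the sketch.

```python
import random

NUM_SHARDS = 20  # more shards => more sustained writes/sec

# A dict stands in for the datastore. In App Engine each key here
# would be a separate entity (e.g. "counter-0" .. "counter-19"),
# each in its own entity group.
shards = {i: 0 for i in range(NUM_SHARDS)}

def increment():
    """Write path: pick one shard at random and update only it.

    Writes spread across shards, so no single entity group sees
    more than its share of the traffic.
    """
    shard_id = random.randrange(NUM_SHARDS)
    shards[shard_id] += 1

def get_count():
    """Read path: sum every shard.

    Reads touch all N shards, so in practice the total is usually
    cached (e.g. in memcache) and recomputed only on a cache miss.
    """
    return sum(shards.values())
```

The trade-off is exactly the one discussed in this thread: writes scale with the shard count, but the summed read is more expensive and only eventually consistent if you cache it, so you pick N based on your expected write rate.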
