Sorry to bring this thread back up again, but I've noticed quite a lot
of issues being posted to this and other groups about the task queue
system failing - and scrolling back through the status page (http://
code.google.com/status/appengine), it's always the task queue that has
problems.
Jeff - I'm a bit confused. I thought that the whole idea of the
datastore was that you could read or write as much as you want, as
fast as you want, as long as the entities are not related? So one
datastore write per vote (each going to a different entity group)
should be fine?
On Mon, Oct 3, 2011 at 9:24 AM, Mat Jaggard matjagg...@gmail.com wrote:
Jeff - I'm a bit confused. I thought that the whole idea of the
datastore was that you could read or write as much as you want, as
fast as you want, as long as the entities are not related? So one
datastore write per vote...
After each vote we want to send back the actual state of the voted
object (the current vote total)... so we need to store the number of
votes, not only the deltas.
The actual state of the votes could be kept in a backend's cache, with
changes written to the DB in batches.
What do you think about this solution?
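In rough Python, the idea might look like the sketch below - a backend keeps the actual vote totals in RAM and flushes them to the DB in one batched write. `VoteCache` and its methods are illustrative names, and a plain dict stands in for the real datastore:

```python
# Illustrative sketch of the proposal above: the backend keeps the actual
# vote totals in RAM and writes them to the datastore in batches. A dict
# stands in for the datastore; VoteCache is a hypothetical name.

class VoteCache:
    def __init__(self, datastore):
        self.datastore = datastore   # stands in for the persistent store
        self.counts = {}             # item_id -> current total (the "actual state")

    def vote(self, item_id):
        # Seed from the datastore the first time we see an item.
        if item_id not in self.counts:
            self.counts[item_id] = self.datastore.get(item_id, 0)
        self.counts[item_id] += 1
        return self.counts[item_id]  # actual state sent back to the voter

    def flush(self):
        # One batched write instead of one datastore write per vote.
        self.datastore.update(self.counts)

db = {}
cache = VoteCache(db)
for _ in range(5):
    cache.vote("item-S")
cache.flush()
print(db["item-S"])  # -> 5
```

The point of the pattern is that the response to each voter comes from RAM, while the datastore only sees one write per flush interval per item.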
Will that end up being cheaper than the following?
* Put a new entity in the datastore for each vote - Kind: Vote, ID:
auto-generated, Vote for: Item S.
* Have a task queue query all the votes, delete them, then write the
count of votes to a global object.
Cost = 1 datastore read + 1 datastore write...
Assuming the goal is massive write throughput, you don't want to do
one write per vote. You need to aggregate writes into a batch - you
can do that with pull queues, but then you're limited to the maximum
throughput of a pull queue. And the biggest batch size is 1,000,
which might actually be...
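The aggregation step itself is simple. A minimal sketch, with a plain list standing in for the pull queue and `drain` as an illustrative name for the lease-and-collapse step (the real API leases tasks with payloads; here each "task" is just an item id):

```python
from collections import Counter

# Sketch of the pull-queue aggregation described above: each vote is a
# small queued task; a worker leases up to 1,000 at a time and collapses
# them into one counter delta per item. A list stands in for the queue.

MAX_LEASE = 1000  # the batch-size limit mentioned above

def drain(queue):
    # "Lease" a batch off the front of the queue, removing it.
    batch, queue[:] = queue[:MAX_LEASE], queue[MAX_LEASE:]
    deltas = Counter(batch)  # item_id -> number of votes in this batch
    return deltas

queue = ["item-A"] * 3 + ["item-B"] * 2
deltas = drain(queue)
# 1,000 queued votes for the same item collapse into a single +N write.
```

With this shape, the datastore sees one write per item per batch rather than one per vote, which is exactly where the pull queue's own throughput becomes the new ceiling.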
Price:
- with backends, let's say 3 B2 machines = 350 USD/month
- UrlFetch data sent/received: 0.15 USD/GB
Limit:
- URL Fetch daily limit: 46,000,000 calls
This can be a problem... but I see it is possible to request an
increase.
Write data to the DB in parallel: Task Queue with a rate...
It's hard to say here if we're talking about the same thing, but
here's how I would do it:
* Updates go through to the backend, which stores write deltas in RAM
(not the total).
* Reads read through memcache into the datastore.
* The backend writes deltas to the datastore in batch, updating the
stored totals.
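The read path in that design is an ordinary read-through cache. A small sketch, with dicts standing in for memcache and the datastore (`read_count` is an illustrative name):

```python
# Sketch of the read path above: reads go through a cache and fall back
# to the datastore on a miss. Dicts stand in for memcache/datastore.

def read_count(item_id, cache, datastore):
    if item_id in cache:
        return cache[item_id]          # cache hit
    value = datastore.get(item_id, 0)  # miss: read through to the datastore
    cache[item_id] = value             # populate the cache for later reads
    return value

cache, db = {}, {"item-S": 42}
read_count("item-S", cache, db)  # first read comes from the datastore
read_count("item-S", cache, db)  # second read is served from the cache
```

Note the cached value can lag the true count by whatever deltas the backend hasn't flushed yet - that staleness window is the trade-off this design accepts.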
Many writes to the same entity will lead to datastore contention
failures. You really should consider sharding:
http://code.google.com/appengine/articles/sharding_counters.html
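The core of the pattern in that article: spread one logical counter over N shard rows so concurrent increments hit different entities. A minimal sketch, with dict entries standing in for the shard entities (`increment`/`total` are illustrative names):

```python
import random

# Minimal sketch of the sharded-counter pattern linked above. Each logical
# counter is split across NUM_SHARDS rows; a writer picks a random shard,
# so concurrent increments rarely contend on the same entity.

NUM_SHARDS = 20

def increment(shards, item_id):
    shard_key = (item_id, random.randrange(NUM_SHARDS))
    shards[shard_key] = shards.get(shard_key, 0) + 1

def total(shards, item_id):
    # Reading the count means summing every shard for the item.
    return sum(v for (item, _), v in shards.items() if item == item_id)

shards = {}
for _ in range(100):
    increment(shards, "item-S")
print(total(shards, "item-S"))  # -> 100
```

Writes scale with the number of shards; the cost moves to the read side, which is why the article pairs this with memcache for the totals.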
On Sep 26, 12:41 am, Peter Dev dev133...@gmail.com wrote:
We are developing an application where users can vote for many
objects.
The sharded counter is cool and I use it... but if you have millions
of objects I cannot imagine how to manage them:
1,000,000 obj x 100 shards = 10,000,000 counters
1. How do you reset them to 0 at specified intervals?
2. How do you compute the sharded sum for each object to show the top
100 objects?
3. Too many DB API calls.
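For question 2, the ranking step reduces to collapsing each object's shards into one sum and taking the top N. A hedged sketch (with millions of objects this would be an offline batch job, not an in-request loop; the data layout and `top_n` name are illustrative):

```python
import heapq
from collections import defaultdict

# Sketch for ranking sharded counters: collapse the per-object shards
# into one sum per object, then take the top N sums.

def top_n(shards, n=100):
    sums = defaultdict(int)
    for (item_id, _shard), count in shards.items():
        sums[item_id] += count  # collapse all shards for each object
    return heapq.nlargest(n, sums.items(), key=lambda kv: kv[1])

shards = {("a", 0): 5, ("a", 1): 7, ("b", 0): 3, ("c", 0): 20}
print(top_n(shards, n=2))  # -> [('c', 20), ('a', 12)]
```

`heapq.nlargest` keeps only N candidates in memory at once, so the top-100 can be computed in one streaming pass over all the shard rows.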
Sorry, 100,000,000 counters.
On Sep 27, 4:53 pm, Peter Dev dev133...@gmail.com wrote:
The sharded counter is cool and I use it... but if you have millions
of objects I cannot imagine how to manage them: 1,000,000 obj x 100
shards = 10,000,000 counters.
1. How do you reset them to 0 at specified intervals?
Yeah, messy.
I'd use a backend for this. Possibly a set of backends if you need to
shard the data for write volume. I'd use Memcache only to cache the
count reads.
The basic entity is just an id and a count. An increment request goes
to a backend, which simply tracks the change. A batch write
periodically applies the accumulated changes to the datastore.
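Put together, that backend might look like the sketch below - note it tracks only the deltas in RAM (not the totals, unlike the earlier cache-the-totals proposal), and the flush applies them to the stored counts. `DeltaBackend` is a hypothetical name and a dict stands in for the datastore:

```python
# Sketch of the backend described above: the entity is just (id, count);
# the backend accumulates deltas in RAM, and a periodic batch write
# applies them to the stored counts. A dict stands in for the datastore.

class DeltaBackend:
    def __init__(self, datastore):
        self.datastore = datastore
        self.deltas = {}   # id -> pending change since the last flush

    def increment(self, item_id, amount=1):
        self.deltas[item_id] = self.deltas.get(item_id, 0) + amount

    def flush(self):
        # One batched write: apply every pending delta, then reset.
        for item_id, delta in self.deltas.items():
            self.datastore[item_id] = self.datastore.get(item_id, 0) + delta
        self.deltas.clear()

db = {"item-S": 10}
backend = DeltaBackend(db)
for _ in range(3):
    backend.increment("item-S")
backend.flush()
print(db["item-S"])  # -> 13
```

Tracking deltas rather than totals means the backend never needs to read the datastore on the write path, and a lost flush loses only the deltas accumulated since the previous one.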