Thanks, Waleed, for your detailed reply.

These are very good suggestions and very inspiring ways to solve my
problems. Thank you so much!

On Nov 10, 6:55 am, Waleed Abdulla <[email protected]> wrote:
> The math looks right, but it might be missing a few additional cost items.
> I think you're assuming 2 writes for index updates (ascending and
> descending). Most likely you'll also need a composite index that includes
> the date so you can show the items in a reverse chronological order
> (assuming your feed is like Twitter). So I'd say the cost is $0.06 in index
> write ops per update rather than $0.04.
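Just to make sure I follow the arithmetic, here is how I read it as a
sketch (the per-op price below is a placeholder I picked so the figures
line up with yours, not the real App Engine rate):

```python
# Back-of-the-envelope tally of index write ops per feed update.
# PRICE_PER_WRITE_OP_CENTS is a placeholder, not the actual rate.
PRICE_PER_WRITE_OP_CENTS = 2

def index_write_cost_cents(property_ops, composite_ops):
    """property_ops: write ops for per-property indexes
    (ascending + descending = 2 per indexed property);
    composite_ops: one extra write op per composite index."""
    return (property_ops + composite_ops) * PRICE_PER_WRITE_OP_CENTS

print(index_write_cost_cents(2, 0))  # -> 4 cents, i.e. the $0.04 case
print(index_write_cost_cents(2, 1))  # -> 6 cents, with the composite date index
```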
>
> Depending on your specific requirements, you might be able to find
> optimizations over the general case that could save you a lot. A few ideas:
>
> - Create your own indexes. Native indexes require 2 ops for each property +
> 1 op for each composite index. What if you create your own UserStream
> entity that contains the latest 500 updates for that user? With every new
> update, you fetch and update each UserStream entity. That's 1 get + 1 write
> per update per user, as opposed to 3 writes per update per user. Not a huge
> saving, but it also gives you the option to choose not to update all users'
> streams (i.e. ignore those who haven't logged in for a while).
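If I understand this idea right, the data shape would be something like
the sketch below (names like UserStream and MAX_ITEMS are my own; this
models the entity in plain Python rather than with the datastore API):

```python
from collections import deque

MAX_ITEMS = 500  # the cap suggested above; tune to your entity-size limit

class UserStream:
    """Plain-Python model of a per-user entity holding that user's
    latest updates, so each fan-out costs 1 get + 1 write instead of
    3 index writes. In App Engine this would be a datastore entity."""

    def __init__(self):
        # deque(maxlen=...) silently drops the oldest item when full
        self.items = deque(maxlen=MAX_ITEMS)

    def add_update(self, update):
        self.items.appendleft(update)  # keep newest first

    def latest(self, n=20):
        return list(self.items)[:n]
```

So on each new update I would loop over the followers I choose to fan
out to (skipping inactive ones, as you suggest), get each UserStream,
call add_update, and write it back.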
>
> - If you have a limit on how many sources a user can follow (say, 400 max),
> then you can create an index entity for each source, SourceFeed, that
> contains the latest, say, 100 updates from that source. Then, to generate
> the stream of one user, you load the 400 entities with db.get([list]),
> which loads them in parallel, and then generate the stream in memory. This
> will work only if you have a small limit on how many sources a user can
> follow. The cost here will be when the user views the stream rather than
> when an update is made. The cost will be lower only if your users don't
> log in too often. Otherwise, the previous solutions would be better. Here,
> you can also choose to store the SourceFeed of popular sources in memcache
> to further reduce read ops.
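I think the in-memory merge step for this one could look like the
sketch below (pure Python; I model each SourceFeed as a list of
(timestamp, update) pairs already sorted newest-first, as if loaded
with db.get([...]) or from memcache):

```python
import heapq
from itertools import islice

def build_stream(source_feeds, limit=100):
    """Merge per-source feeds (each sorted newest-first by timestamp)
    into one reverse-chronological stream, keeping the top `limit`."""
    # heapq.merge expects ascending inputs, so negate timestamps as the key.
    merged = heapq.merge(*source_feeds, key=lambda item: -item[0])
    return list(islice(merged, limit))

feed_a = [(300, "a3"), (100, "a1")]
feed_b = [(250, "b2"), (50, "b0")]
print(build_stream([feed_a, feed_b], limit=3))
# [(300, 'a3'), (250, 'b2'), (100, 'a1')]
```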
>
> - You can use a backend instance with a lot of memory to store big parts of
> recent updates so that for most users you can generate the stream in
> memory.
>
> Waleed
>

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
