On Fri, Jul 8, 2011 at 12:42 PM, an0nym <[email protected]> wrote:
>
>  If I need to fetch/write 200 bytes of data for a small entity, and instead
> I fetch/write a huge 1 MB entity (by the way, consuming Google-internal
> bandwidth, maybe even across datacenters with HRD; I'm not even talking
> about the increase in Google-internal CPU usage) in order to get those 200
> bytes out, I end up with higher latency (parsing 1 MB takes longer than
> parsing 200 bytes), higher CPU usage (parsing 1 MB is harder than
> parsing 200 bytes), higher memory usage (intermediate results have to be
> stored somewhere) and higher memcache memory usage (I don't want to parse it
> every time, huh).
> Are you sure Google won't increase the prices again because of these points?

I'm not *sure* of anything, but it's not too hard to look at
architecture decisions and infer the rationale behind them.

Based on pricing (big entities are priced the same as small entities)
and API design (there is no way to fetch just part of an entity), I
presume that the raw size of an entity isn't that significant a
problem.  At least, it's not as significant a problem as fetching
multiple entities, which is what you're likely to do when you have a
heavily normalized structure.

Still, experimental data rules.  I'd love to see a test of comparable
data models.
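For what it's worth, here's a rough, datastore-free sketch of the
parse-cost half of the argument. It uses json as a stand-in for entity
serialization, and the entity shapes and sizes (~1 MB padded entity vs.
a ~200-byte slice) are invented for illustration; a real test would
have to run against the datastore itself.

```python
import json
import timeit

# Hypothetical stand-ins: a denormalized "big entity" (~1 MB of
# properties) vs. the small slice of it we actually need (~200 bytes).
big_entity = {"wanted": "x" * 200,
              "padding": ["y" * 1000 for _ in range(1000)]}  # ~1 MB
small_entity = {"wanted": "x" * 200}  # ~200 bytes

big_blob = json.dumps(big_entity)
small_blob = json.dumps(small_entity)

def parse_cost(blob, repeats=20):
    """Time deserializing a stored blob `repeats` times."""
    return timeit.timeit(lambda: json.loads(blob), number=repeats)

print(f"big blob:   {len(big_blob):>9} bytes, {parse_cost(big_blob):.4f}s")
print(f"small blob: {len(small_blob):>9} bytes, {parse_cost(small_blob):.4f}s")
```

This only measures deserialization on one machine, not RPC latency,
bandwidth, or the cost of multiple round trips, which is exactly why a
side-by-side test of real data models would be more interesting.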

Jeff

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
