On 4/29/12 9:30 PM, Leif Hedstrom wrote:
On 4/23/12 8:04 PM, taorui wrote:
Hmmm, the limit on the size of an object that can be cached isn't spelled out anywhere, but we can estimate it from the source (suppose every fragment is 1M except the first fragment).

agg_buf is 4M, which means the first fragment cannot exceed 4M, so the most fragments an object can have is (4M - sizeof(Doc) - hdr_len) / sizeof(Frag). From that, the largest object that can be cached is ((4M - sizeof(Doc) - hdr_len) / 8) * 1M, which cannot exceed 5G.


Hmmm, maybe my math is completely off, but let's see:

    1. sizeof(Doc) == 72 bytes
    2. sizeof(Frag) == 8 bytes
    3. I'm not sure what hdr_len is (maybe this is the part I'm missing?)


With that, I get

    ((4 * 1024^2 - 72) / 8) * 1 * 1024^2 == 549746376704


As far as I can tell, that is ballpark 512GB, not 5GB. What did I miss? Meaning, unless I'm mistaken, our max file size should be closer to 500GB than 5GB?

Looking at this some more, I think hdr_len is hlen, which I'm guessing is on the order of a few KB. So it should still be close to the above, i.e. a max file size of ~500GB. There would have to be significant overhead somewhere else for it to get down to 5GB, I think (e.g. if we consume much more than sizeof(Frag) per offset).
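
For reference, here's a quick sketch of that arithmetic in C++. The constants are just the assumptions from this thread (sizeof(Doc) == 72, sizeof(Frag) == 8, a 4M agg_buf, 1M per fragment after the first, and a guessed couple of KB for hdr_len), not values pulled from the source tree:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Assumed constants from this thread, not taken from the source tree.
        const int64_t agg_buf   = 4 * 1024 * 1024;  // limit on the first fragment
        const int64_t doc_size  = 72;               // sizeof(Doc)
        const int64_t frag_size = 8;                // sizeof(Frag), one offset per fragment
        const int64_t hdr_len   = 2 * 1024;         // guess: a couple of KB of headers
        const int64_t frag_data = 1024 * 1024;      // assumed 1M of body per fragment

        // Number of Frag offsets that fit in the first fragment's 4M.
        int64_t max_frags = (agg_buf - doc_size - hdr_len) / frag_size;

        // Each offset points at ~1M of body, so the max object size is roughly:
        int64_t max_object = max_frags * frag_data;

        printf("max frags: %lld\n", (long long)max_frags);
        printf("max object: %lld bytes (~%lld GB)\n",
               (long long)max_object,
               (long long)(max_object / (1024 * 1024 * 1024)));
        return 0;
    }

With those numbers it prints a bit over 500GB, so something would have to consume roughly 100x more space per fragment for the limit to come down to 5GB.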

Cheers,

-- Leif
