--- Comment #6 from Steven Schveighoffer <schvei...@yahoo.com> 2010-07-02
10:30:19 PDT ---
OK, here's the deal:
When the current block can be extended into the next page, it is. This extension does not apply any growth factor: if you ask for one more byte and a page can be lumped onto the end of the block, only one page is added. This explains the behavior most of the time.
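To make the page-extension path concrete, here is a minimal sketch (in Python, not the actual druntime code). `PAGE_SIZE` and the function name `pages_needed` are assumptions for illustration; the point is only that the shortfall is rounded up to whole pages with no growth factor applied.

```python
PAGE_SIZE = 4096  # assumed page size; the real value comes from the GC/OS

def pages_needed(requested_bytes, block_size):
    """Hypothetical sketch of the in-place extension path: when the block
    can grow into the following page(s), only whole pages are added, with
    no scaling factor applied to the request."""
    extra = requested_bytes - block_size
    if extra <= 0:
        return 0
    # round the shortfall up to whole pages -- ask for one byte past the
    # end and you get exactly one page
    return -(-extra // PAGE_SIZE)

# one byte past the end of a page-sized block adds exactly one page
print(pages_needed(4097, 4096))  # 1
```

So a request for "one more byte" never over-allocates here; amortization only comes into play on the fallback path below.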
If a page cannot be added, it falls back to the newCapacity function. However, I think this function may return too high a value.
Essentially, the function looks like this:
100 + (1000L * size) / log2plus1(newcap);
where size is the size of a single element in bytes, newcap is the *total*
memory being requested (old size + appended size) in bytes, and log2plus1
returns 1 + the index of the highest set bit. The result is the percentage of
the total requested size to allocate.
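The formula above can be sketched as follows (a Python translation for illustration, not the druntime source; the names `log2plus1` and `growth_percent` follow the comment's terminology):

```python
def log2plus1(n):
    # 1 + index of the highest set bit, e.g. log2plus1(100) == 7
    return n.bit_length()

def growth_percent(size, newcap):
    # Sketch of the percentage the newCapacity formula produces.
    # Integer division mirrors the D expression 1000L * size / log2plus1(newcap).
    return 100 + (1000 * size) // log2plus1(newcap)

print(growth_percent(1, 100))  # 242
```

Note that size appears in the numerator while newcap only enters through the slowly-growing log term, which is why the result climbs with the element size.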
I think the problem is the use of size in this equation. I don't really
understand why size has to be taken into account separately, since newcap
already incorporates the element size.
For example, say you have an array of bytes and you request 100 bytes: size is
1, newcap is 100, and log2plus1(100) is 7. So the formula evaluates to:
100 + (1000L * 1) / 7 => 242, meaning 242% of the original size requested.
Now, if we use int, which is 4 bytes, as our element type, newcap becomes 400
and size is 4, resulting in:
100 + (1000L * 4) / 9 => 544, meaning 544% of the original size requested.
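The two worked cases above generalize: for the same logical request of 100 elements, the percentage keeps climbing with the element size. A small sketch (same hypothetical Python translation of the formula as before) makes the trend visible:

```python
def log2plus1(n):
    # 1 + index of the highest set bit
    return n.bit_length()

def growth_percent(size, newcap):
    # Python rendering of: 100 + (1000L * size) / log2plus1(newcap)
    return 100 + (1000 * size) // log2plus1(newcap)

# Same request of 100 elements, varying element size -- the growth
# percentage rises with size, which is the behavior being questioned.
for size in (1, 2, 4, 8):
    newcap = 100 * size
    print(size, newcap, growth_percent(size, newcap))
```

With these inputs the percentages come out to 242, 350, 544, and 900 respectively, so an 8-byte element type is asked to over-allocate 9x while a 1-byte type gets about 2.4x.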
I'm not sure why larger array element types should need more relative space;
it doesn't make a lot of sense, but the comment on the formula doesn't explain
why it is the way it is.
I'm unsure how to change this; any ideas?
For one, I think we can use the newCapacity function always, even when
appending pages (I think it will just append as many pages as it can up to the