I've been doing some reading up on ByteBuffer, and was wondering:

On Mon, 4 Feb 2008, Daniel Noll wrote:
>   1. Lower memory usage due to not keeping a byte[] copy of all data at the
>      POIFS level.

How would this work? Surely we'll still need to read all the bytes that
make up the whole POIFS stream, then pass those into our underlying
ByteBuffer? I couldn't figure out a way to do it without processing the
whole input stream at least once, since most input streams won't support
zipping about to different places.
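One way this could avoid the up-front read is memory mapping: if the POIFS data is a file on disk (rather than an arbitrary InputStream), FileChannel.map gives a ByteBuffer view that the OS pages in lazily, so random access doesn't require copying everything into a byte[] first. A minimal sketch, using a zero-filled temp file as a stand-in for an OLE2 file:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedRead {
    public static void main(String[] args) throws IOException {
        // Stand-in for an OLE2 file on disk (hypothetical content).
        Path file = Files.createTempFile("poifs", ".bin");
        Files.write(file, new byte[1024]);

        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            // The OS pages bytes in on demand; nothing is copied
            // into a byte[] up front.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            buf.position(512);              // jump straight to a block offset
            System.out.println(buf.get());  // only this page needs to be resident
        }
        Files.delete(file);
    }
}
```

This only helps for the file-backed case, of course; a plain InputStream would still need to be drained once.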

>   2. If you don't ask for a DocumentInputStream for a given Document, the
>      bytes don't even get read.  If you open a stream for a given Document and
>      only read the first part, the rest of the bytes don't even get read.

Again, I'm not sure about that. I can see how we could possibly use a
ByteBuffer to ensure we always use the same set of bytes in all the bits
of POIFS (and on up as required), but surely we'll still need to keep the
bytes behind each DocumentInputStream, otherwise they'll be gone?

> Of course the main beef I have with ByteBuffer is that it is limited to
> Integer.MAX_VALUE size, but I guess with OLE2 this isn't, in practice,
> going to be reached.  I imagine the maximum size for an OLE2 document is
> somewhat lower, although I don't actually know.

Nor do I, but I have a feeling it could well be 2GB too. Surely we have
that 2GB limit already though, since we're reading the POIFS data into a
byte array, which has the same restriction?
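Right: both byte[] and ByteBuffer are indexed by int, so Integer.MAX_VALUE caps either approach. For what it's worth, the usual workaround if we ever needed it is to split a large file into several sub-2GB windows and translate a long offset into (window, offset) pairs. A minimal sketch, assuming a hypothetical 1GB window size:

```java
public class ChunkedAddress {
    // 1GB windows, safely under the Integer.MAX_VALUE cap per mapping.
    static final long WINDOW = 1L << 30;

    // Translate a long file offset into a window index and an
    // int-safe offset within that window.
    static long[] locate(long pos) {
        return new long[] { pos / WINDOW, pos % WINDOW };
    }

    public static void main(String[] args) {
        long pos = 3_500_000_000L;   // beyond Integer.MAX_VALUE (~2.1e9)
        long[] loc = locate(pos);
        System.out.println(loc[0] + "," + loc[1]);  // prints "3,278774528"
    }
}
```

Probably academic for OLE2, but it means the int limit isn't a hard wall for the NIO route either.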


If we can get some memory savings without too much work by switching to
NIO / ByteBuffer, I'm keen to do it. I'm just struggling, almost
certainly because I'm new to it all, to see how it'll deliver much of a
saving just yet. Do please educate me :)

Nick

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]