Thank you Archie and Chen!
  Chen - I’m prototyping the generic allocator you describe and it’s extremely 
effective for object types – but I’m hamstrung by the fact that generics 
can’t be used with primitive byte.  I’m not aware of a way to work around that, 
and changing the array from byte[] to Byte[] would be a terrible idea, so I 
think we’re looking 
at two different allocators.  The template suggested by Archie may help 
implement that, but ultimately it’ll be multiple classes.
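[Editor's note: for readers following the thread, the limitation John describes is that Java type parameters accept only reference types, so a generic allocator cannot be instantiated for primitive byte. A minimal hypothetical sketch (class and method names here are illustrative, not from the PR):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntFunction;

// Hypothetical generic segment allocator. Java type parameters accept only
// reference types, so this can never be instantiated for primitive byte --
// only for Byte, which boxes every element.
class SegmentAllocator<T> {
    private final List<T[]> segments = new ArrayList<>();
    private final IntFunction<T[]> newArray;

    SegmentAllocator(IntFunction<T[]> newArray) {
        this.newArray = newArray;
    }

    T[] newSegment(int size) {
        T[] seg = newArray.apply(size);
        segments.add(seg);
        return seg;
    }

    int segmentCount() {
        return segments.size();
    }
}

public class GenericLimitDemo {
    public static void main(String[] args) {
        // Works for any reference type:
        SegmentAllocator<String> strings = new SegmentAllocator<>(String[]::new);
        strings.newSegment(16);
        System.out.println(strings.segmentCount()); // prints 1

        // SegmentAllocator<byte> bytes = ...;  // does not compile:
        //                                      // primitives are not valid
        //                                      // type arguments.
    }
}
```

This is why a primitive-specialized byte[] allocator ends up as a separate class, possibly generated from a template.]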
  Archie – your suggestion generally matches the implementation on the PR, 
except that the implementation is flexible on the segment size and each 
instance “self-tunes” based on inputs.  There are a few hard-coded scaling 
constants that we could consider tweaking, but my perf tests so far show 
they’re reasonable in the general case.  Self-management eliminates guesswork 
about N and, most importantly, eliminates duplicative copying/allocation after 
the bytes have been recorded.  The benchmark tests a handful of hard-coded 
sizes and can easily be expanded to cover more, at the expense of longer 
runtimes.
  I’ll update the PR later today with these new suggestions alongside the 
current implementation, so we can clearly evaluate pros and cons.
  Thanks!
     John

At the risk of repeating my previous 
comment<https://mail.openjdk.org/pipermail/core-libs-dev/2025-March/141871.html>,
 I agree with Chen.

That is to say, there is a separate, more fundamental unsolved problem lurking 
underneath this discussion, and the two problem "layers" are perhaps better 
addressed separately.

Once the lower layer problem is properly framed and resolved, it becomes 
reusable, and wrapping it to solve various higher-layer problems is easy.

An internal class would be a reasonable and conservative way to start. There 
could even be a suite of such classes, built from templates a la 
X-Buffer.java.template.

These could be used all over the place (e.g., refactor StringBuilder). I also 
wonder how much the performance of something like ArrayList could be improved 
in scenarios where you are building (or removing elements from) large lists.

Just thinking out loud (apologies)... Define a "segmented array allocator" as 
an in-memory byte[] array builder that "chunks" the data into individual 
segments of size at most N.
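[Editor's note: a rough sketch of what such a builder might look like. This is illustrative only, not the PR's implementation; the class name and the fixed segment cap N are assumptions.]

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not the actual PR implementation): a byte[] builder
// that accumulates data in fixed-size segments of at most N bytes, copying
// everything into a single array only when the caller asks for it.
class SegmentedByteArrayBuilder {
    private static final int N = 8192;        // assumed segment-size cap
    private final List<byte[]> segments = new ArrayList<>();
    private byte[] current = new byte[N];
    private int pos;                          // write position in current segment
    private int total;                        // total bytes written

    void write(byte[] src, int off, int len) {
        while (len > 0) {
            if (pos == N) {                   // current segment full: start a new one
                segments.add(current);
                current = new byte[N];
                pos = 0;
            }
            int n = Math.min(len, N - pos);
            System.arraycopy(src, off, current, pos, n);
            pos += n; off += n; len -= n; total += n;
        }
    }

    byte[] toByteArray() {                    // a single copy at the very end
        byte[] out = new byte[total];
        int dst = 0;
        for (byte[] seg : segments) {
            System.arraycopy(seg, 0, out, dst, N);
            dst += N;
        }
        System.arraycopy(current, 0, out, dst, pos);
        return out;
    }
}

public class SegmentedBuilderDemo {
    public static void main(String[] args) {
        SegmentedByteArrayBuilder b = new SegmentedByteArrayBuilder();
        byte[] data = new byte[20000];        // spans three 8 KiB segments
        for (int i = 0; i < data.length; i++) data[i] = (byte) i;
        b.write(data, 0, data.length);
        byte[] out = b.toByteArray();
        System.out.println(out.length == 20000 && out[12345] == (byte) 12345); // prints true
    }
}
```

Unlike a single growing array, filling a new segment never copies the bytes already written; only the final toByteArray() pays one copy.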

We can think of the current ByteArrayOutputStream as such a thing with N = 2³¹; 
that is, there's only ever one "chunk".

The assertion is that N = 2³¹ is not the most efficient value. And obviously 
neither is N = 1.

So somewhere in the middle there is an optimal value for N, which presumably 
could be discovered via experimentation. It may be different for different 
architectures.

Another parameter would be: What is the size M ≤ N of a new chunk? E.g. you 
could start with M = 16 and then the chunk grows exponentially until it reaches 
N, at which point you start a new chunk. The optimal value for M could also be 
performance tested (it may already have been).
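[Editor's note: a sketch of that growth policy. The starting size M and cap N below are assumed placeholder values, not measured optima.]

```java
// Illustrative growth-policy sketch: a new chunk starts at M bytes and
// doubles on each refill until it reaches the cap N, after which every
// subsequent chunk is allocated at exactly N bytes. M and N are assumed
// placeholder values, not tuned constants.
class ChunkSizer {
    static final int M = 16;       // assumed initial chunk size
    static final int N = 8192;     // assumed maximum chunk size

    static int nextChunkSize(int previousSize) {
        if (previousSize == 0) return M;         // first chunk
        return Math.min(previousSize * 2, N);    // grow exponentially up to N
    }
}

public class GrowthDemo {
    public static void main(String[] args) {
        int size = 0, totalCapacity = 0, chunks = 0;
        while (totalCapacity < 100_000) {
            size = ChunkSizer.nextChunkSize(size);
            totalCapacity += size;
            chunks++;
        }
        // Exponential growth keeps the chunk count small even for
        // large outputs: here ~100 KB needs only a couple dozen chunks.
        System.out.println(chunks);
    }
}
```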

Of course, for performance optimization we'd need some distribution of array 
sizes that models "typical" use, etc.

-Archie

On Wed, Apr 9, 2025 at 6:19 PM Chen Liang 
<liangchenb...@gmail.com> wrote:
Hi John Engebretson,
I still wonder if we can make the byte array allocator a utility in the JDK, at 
least an internal one. I find that besides replacing BAOS uses, it can also 
optimize callers like InputStream.readNBytes, the classfile BufWriterImpl, and 
maybe many more. Such an internal addition could be accepted into the JDK 
immediately because it has no compatibility impact and does not need to undergo 
CSR review.

Chen Liang
