I'm not 100% sure, but I believe the problem is that the unpooled buffer is
self-expanding and the pooled buffer is not. The JavaDoc for
io.netty.buffer.Unpooled#buffer() states:

> Creates a new big-endian Java heap buffer with reasonably small initial
capacity, which expands its capacity boundlessly on demand.

If you size the pooled buffer correctly (>= 33), the failing tests will
pass.
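To illustrate the distinction with a minimal, analogous sketch using plain
java.nio.ByteBuffer rather than the Artemis/Netty API (so the class name,
helper methods, and the 4-byte-length-prefix-plus-UTF-8 encoding here are
assumptions for demonstration, not the actual Core wire format): a
fixed-capacity buffer fails outright once a write exceeds its capacity,
while sizing it for the full encoded form lets the string round-trip.

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class FixedBufferDemo {

    // Tries to write a 4-byte length prefix plus the payload into a
    // fixed-capacity buffer; returns false if the buffer is too small.
    static boolean writeFits(int capacity, byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(capacity);
        try {
            buf.putInt(payload.length);
            buf.put(payload);
            return true;
        } catch (BufferOverflowException e) {
            return false;
        }
    }

    // Encodes the string into a buffer sized exactly for this toy encoding
    // (4-byte length prefix + UTF-8 bytes) and decodes it back.
    static String roundTrip(String s) {
        byte[] payload = s.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(Integer.BYTES + payload.length);
        buf.putInt(payload.length);
        buf.put(payload);
        buf.flip();
        byte[] out = new byte[buf.getInt()];
        buf.get(out);
        return new String(out, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String s = "60v4MKNelDuNDUHn8itGjok2HN0"; // 27 chars, like the failing test
        // 21 bytes cannot hold 4 (length) + 27 (payload) bytes.
        System.out.println(writeFits(21, s.getBytes(StandardCharsets.UTF_8)));
        // Sized correctly, the string round-trips.
        System.out.println(roundTrip(s).equals(s));
    }
}
```

The Artemis pooled buffer presumably wraps a fixed-capacity Netty buffer in
a similar way, which would explain why the write silently misbehaves instead
of growing the backing storage as the unpooled variant does.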


Justin

On Sat, Mar 2, 2024 at 6:12 PM Havret <h4v...@gmail.com> wrote:

> I'm including one more failing test:
>
>    @Test
>    public void encodeAndDecodeMediumSizeStringOfMoreThan26Chars() {
>       String string = "60v4MKNelDuNDUHn8itGjok2HN0";
>       ActiveMQBuffer pooledBuffer = ActiveMQBuffers.pooledBuffer(21);
>       pooledBuffer.writeString(string);
>
>       var array = pooledBuffer.toByteBuffer().array();
>       ActiveMQBuffer activeMQBuffer = ActiveMQBuffers.wrappedBuffer(array);
>
>       String decoded = activeMQBuffer.readString();
>
>       Assert.assertEquals(string, decoded);
>    }
>
> org.junit.ComparisonFailure:
> Expected :60v4MKNelDuNDUHn8itGjok2HN0
> Actual   :���������������������������
>
> On Sun, Mar 3, 2024 at 12:44 AM Havret <hav...@apache.org> wrote:
>
> > Hi,
> >
> > I've recently started working on the dotnet Artemis Client that's going
> to
> > use Core protocol. During my work on the binary encoder, I've hit a weird
> > issue. Everything works fine when I encode strings up to 26 characters.
> > But, for strings longer than 26 characters, the binary layout just goes
> > haywire—there's this huge 8k bytes gap from where the last data was
> > encoded, and I can't figure out why.
> >
> > I managed to boil it down to 2 simple test cases you can find here:
> > https://gist.github.com/Havret/486a5acc339c67cdc11eccc33e54b178. The
> > unpooledBuffer behaves as expected, just like in my dotnet setup, but the
> > pooledBuffer acts up strangely. Is this some bug in the Artemis core
> > encoding, or am I missing something obvious?
> >
> > Thanks for any light you can shed on this,
> > Krzysztof
> >
> >
> >
>
