On 4/20/20 6:09 PM, Adam Retter wrote:
> I was surprised by my findings that:
>
> 1. On JDK 7 and 8 with HotSpot - getting the bytes of a UTF-8 string
> where all chars are '0' wants to allocate an array larger than the VM
> limit, whereas the same operation on ASCII and ISO-8859-1 do not. If
Hi,
I'm on Linux, but the explanation might be the same as the following one.
An easier way to obtain the same error on OpenJDK8 + HotSpot is to execute
byte[] b = new byte[Integer.MAX_VALUE];
which is exactly what happens behind the scenes in the UTF-8 case.
The encoder pessimistically assumes that every char may require maxBytesPerChar bytes (3 for UTF-8 on these JDKs) and allocates the worst-case byte array up front, whereas the ASCII and ISO-8859-1 encoders report a maxBytesPerChar of 1 and so never exceed the string length.
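The difference in worst-case sizing can be observed directly by querying each encoder's maxBytesPerChar; the class and output below are a minimal sketch (the class name MaxBytesDemo is mine, not from the thread), assuming the stock JDK charset implementations:

```java
import java.nio.charset.Charset;

public class MaxBytesDemo {
    public static void main(String[] args) {
        // maxBytesPerChar is the factor String.getBytes uses to size
        // its worst-case output array before encoding.
        for (String name : new String[] {"US-ASCII", "ISO-8859-1", "UTF-8"}) {
            float max = Charset.forName(name).newEncoder().maxBytesPerChar();
            System.out.println(name + " -> " + max);
        }
    }
}
```

With a factor of 3, a string a little over Integer.MAX_VALUE / 3 chars long is enough to push the up-front allocation past the VM's array size limit, even though the encoded UTF-8 output would easily fit.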
Hi there,
I am not sure whether the following is expected behaviour or whether it indicates one or more bugs. Regardless, the behaviour was surprising to me, as it seems to vary between JVM versions and vendors.
The Java code is simply:
import java.io.UnsupportedEncodingException;
import java.nio.charse