I like it. The only thing I feel is missing is an official API to get the 
operating environment's default encoding (essentially the value that would be 
used if COMPAT had been specified).

For example, in our server application we have code which is specified as 
using exactly this charset (i.e. if the user configures targetEncoding=PLATFORM 
we intentionally use the no-arg APIs). We can change that code to specify a 
Charset, but then we need a way to retrieve it - without poking into 
unsupported system properties or environment variables. For example, 
System.platformCharset().
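
To illustrate, a rough sketch only: System.platformCharset() is the API being 
proposed here and does not exist yet, and targetEncoding/PLATFORM are just our 
own configuration names.

    import java.io.OutputStream;
    import java.io.OutputStreamWriter;
    import java.nio.charset.Charset;

    class TargetEncodingConfig {

        // Map our targetEncoding setting to an explicit Charset.
        static Charset resolve(String targetEncoding) {
            if ("PLATFORM".equals(targetEncoding)) {
                // today: we just call the charset-less APIs and let the
                // runtime pick the default; with the proposed API we
                // could be explicit instead:
                return System.platformCharset();   // proposed, does not exist yet
            }
            return Charset.forName(targetEncoding);
        }

        // Explicit charset instead of new OutputStreamWriter(out).
        static OutputStreamWriter newWriter(OutputStream out, String targetEncoding) {
            return new OutputStreamWriter(out, resolve(targetEncoding));
        }
    }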

I understand that this might have its own complications, as not all operating 
systems have this concept (for example, on Windows there can be different code 
pages depending on the Unicode status of an application). But falling back to 
the logic that computes today's file.encoding would at least be consistent, 
and it is the behavior most implementers would want when adapting legacy code 
to this JEP.
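
For comparison, the closest thing we can do today is exactly the kind of 
poking we would like to avoid - a sketch of the unsupported workaround, not a 
recommendation:

    import java.nio.charset.Charset;

    class PlatformCharsetWorkaround {

        // Unsupported workaround: file.encoding is an implementation
        // detail, and once the default becomes UTF-8 it no longer tells
        // us the OS encoding unless -Dfile.encoding=COMPAT was given.
        static Charset guessPlatformCharset() {
            String name = System.getProperty("file.encoding");
            return (name != null && Charset.isSupported(name))
                    ? Charset.forName(name)
                    : Charset.defaultCharset();   // best-effort fallback
        }
    }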

Regards
Bernd
--
http://bernd.eckenfels.net
________________________________
From: core-libs-dev <core-libs-dev-r...@openjdk.java.net> on behalf of 
mark.reinh...@oracle.com <mark.reinh...@oracle.com>
Sent: Thursday, March 11, 2021 1:27:05 AM
To: naoto.s...@oracle.com <naoto.s...@oracle.com>
Cc: core-libs-dev@openjdk.java.net <core-libs-dev@openjdk.java.net>; 
jdk-...@openjdk.java.net <jdk-...@openjdk.java.net>
Subject: New candidate JEP: 400: UTF-8 by Default

https://openjdk.java.net/jeps/400

  Summary: Use UTF-8 as the JDK's default charset, so that APIs that
  depend on the default charset behave consistently across all platforms
  and independently of the user’s locale and configuration.

- Mark
