On 02/25/2016 01:29 PM, Richard Warburton wrote:
Hi,

            +1. I was confused by this behaviour when I submitted a String-related
        patch a while back but never got round to submitting a fix. It actually
        means that in String decoding, passing the name of the charset for String
        to look up is often faster than passing a Charset object - counter-intuitive
        and less typesafe.
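[Editor's note: a minimal, self-contained sketch of the two overloads being compared. The claim above is that the name-based overload could cache the decoder per charset name, while the Charset-object overload could not safely cache, since the object may be a user-supplied subclass. The class name below is illustrative, not from the thread.]

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public class DecodeComparison {
    public static void main(String[] args) throws UnsupportedEncodingException {
        byte[] bytes = "hello".getBytes(StandardCharsets.UTF_8);

        // Charset-name overload: the looked-up decoder can be cached per name,
        // which (counter-intuitively) made this path faster at the time.
        String byName = new String(bytes, "UTF-8");

        // Charset-object overload: the argument could be an untrusted,
        // user-supplied Charset subclass, so its coder was not cached.
        String byObject = new String(bytes, StandardCharsets.UTF_8);

        System.out.println(byName.equals(byObject)); // prints "true"
    }
}
```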


    We can't cache the "coder" from a passed-in Charset, for security reasons.


Thanks for reminding me of this; apologies, I had forgotten the issue.

Elsewhere in the string encoding/decoding code there is (or at least was, the last 
time I looked) an assumption that certain charset implementations are "trusted" - 
basically the ones shipped by Oracle. It used to be the ones on the bootclasspath; 
I don't know what happens in the modular world. Wouldn't it be reasonable to trust 
those same charsets with this optimisation as well? They are the most commonly used.
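[Editor's note: one plausible shape for the "trusted charset" test described above is checking that the Charset implementation was loaded by the bootstrap class loader, i.e. it ships with the JDK. This is a hypothetical sketch, not the actual JDK code; the class and method names are invented for illustration.]

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class CharsetTrust {
    // Hypothetical trust check: a Charset whose class was loaded by the
    // bootstrap loader (getClassLoader() == null) comes from the JDK itself,
    // so caching its coder cannot be subverted by a user-supplied subclass.
    static boolean isTrusted(Charset cs) {
        return cs.getClass().getClassLoader() == null;
    }

    public static void main(String[] args) {
        // The built-in UTF-8 implementation is bootstrap-loaded on a standard JDK.
        System.out.println(isTrusted(StandardCharsets.UTF_8)); // prints "true"
    }
}
```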



It's a good point. I think we probably can/should trust those charsets.

sherman
