Hi folks.

Today, bytes that come in from the network get turned into UTF-16 by the 
decoding process. We then turn some of them back into Latin-1 during the 
parsing process. Should we make changes so there’s an 8-bit path? It might be 
as simple as giving TextCodecUTF8 more of an all-ASCII special case and doing 
something similar in TextCodecWindowsLatin1.
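
To make the idea concrete, here is a rough sketch of the kind of fast path I 
have in mind, written against plain C++ types rather than our actual String 
and codec classes; the names are illustrative only, not the real ones:

    // Rough sketch of an all-ASCII fast path for a decoder.
    // Plain C++ types stand in for WebKit's String/TextCodec classes;
    // all names here are made up for illustration.

    #include <cstdint>
    #include <optional>
    #include <string>

    // Returns true if every byte is 0x00-0x7F, i.e. the chunk is pure ASCII
    // and therefore already valid as both UTF-8 and Latin-1.
    static bool isAllASCII(const uint8_t* bytes, size_t length)
    {
        for (size_t i = 0; i < length; ++i) {
            if (bytes[i] & 0x80)
                return false;
        }
        return true;
    }

    // Fast path: if the chunk is all ASCII, keep it as an 8-bit string and
    // skip the UTF-16 expansion entirely. Returns nullopt when the full
    // (slow) decode to UTF-16 is needed instead.
    std::optional<std::string> tryDecodeASCIIFastPath(const uint8_t* bytes, size_t length)
    {
        if (!isAllASCII(bytes, length))
            return std::nullopt;
        return std::string(reinterpret_cast<const char*>(bytes), length);
    }

The caller would fall back to the existing UTF-16 decode whenever the fast 
path declines, so correctness wouldn’t change; only the all-ASCII case would 
stay 8-bit end to end.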

Is there something significant to be gained here? I’ve been wondering this for 
a while, so I thought I’d ask the rest of the WebKit contributors.

-- Darin
