+1 to string16

I can't make performance or memory-saving claims with a straight face
for any of these options. We just don't process enough strings for it to matter.


On Feb 4, 9:57 am, Mike Belshe <[email protected]> wrote:
> The big string area is webkit, of course.  If webkit were 100% UTF-8
> already, we might take a different stance on this issue as well.
>
> If it is our goal to get to UTF-8 everywhere, then laying the plumbing for
> utf8 strings rather than string16 strings seems like the right thing to do.
>
> Mike
>
> On Wed, Feb 4, 2009 at 9:52 AM, Darin Fisher <[email protected]> wrote:
> > On Wed, Feb 4, 2009 at 9:35 AM, Dean McNamee <[email protected]> wrote:
>
> >> On Wed, Feb 4, 2009 at 6:11 PM, Darin Fisher <[email protected]> wrote:
> >> > The proposal was to search-n-replace std::wstring to string16.  We
> >> > would have to invent a macro to replace L"" usage.  Most usages of
> >> > string literals are in unit tests, so it doesn't seem to matter if
> >> > there is a cost associated with the macro.
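
For concreteness, here is a minimal sketch of what such a macro could look
like, assuming string16 is a basic_string of a 16-bit character type.  The
names WidenAscii and STR16 are illustrative, not Chromium's actual API:

    #include <string>

    typedef unsigned short char16;
    typedef std::basic_string<char16> string16;

    // Widen an ASCII-only literal to string16 at runtime.
    inline string16 WidenAscii(const char* ascii) {
      string16 out;
      for (const char* p = ascii; *p; ++p)
        out.push_back(static_cast<char16>(*p));
      return out;
    }

    // Stand-in for L"" literals; the runtime cost is tolerable since
    // most string literals live in unit tests anyway.
    #define STR16(x) WidenAscii(x)
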
> >> > My belief is that there isn't much fruit to be had by converting
> >> > everything to UTF-8.  I fear people passing non-UTF-8 strings around
> >> > using std::string and the bugs that ensue from that.  We've had those
> >> > problems in areas that deal with UTF-8 and non-UTF-8 byte arrays.
> >> > Whenever we have a string16 or a wstring, it implicitly means that we
> >> > have Unicode that can be displayed to the user.  So the compiler helps
> >> > us not screw up.
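
To illustrate Darin's point about the compiler helping: with a distinct
type, mixing raw bytes and displayable text fails to compile.  ShowMessage
and ReadHeader below are made-up names, and string16 is as sketched above:

    void ShowMessage(const string16& text);  // expects displayable Unicode
    std::string ReadHeader();                // raw bytes, unknown encoding

    void Example() {
      // Compile error: no implicit conversion from std::string to string16.
      ShowMessage(ReadHeader());
    }
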
>
> >> This seems to be the only argument you make: that by making string16 a
> >> new type, we know its encoding.  This can be solved in many other ways
> >> while keeping UTF-8.  We could add a new utf8 string class if you really
> >> want, or we could just be diligent and make sure to DCHECK in methods
> >> that expect a specific encoding.  Have we had a lot of these problems?
> >> Do you have some examples?  It would help me figure out solutions for
> >> better UTF-8 checking.
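
A rough sketch of the DCHECK approach Dean mentions, using a simplified
validity check.  IsLikelyUTF8 and SetDisplayText are illustrative names,
and assert stands in for Chromium's DCHECK:

    #include <cassert>
    #include <string>

    // Simplified UTF-8 check: validates lead/continuation byte structure
    // only; a real validator would also reject overlongs and surrogates.
    static bool IsLikelyUTF8(const std::string& s) {
      for (size_t i = 0; i < s.size(); ) {
        unsigned char c = static_cast<unsigned char>(s[i]);
        size_t extra;
        if (c < 0x80)                extra = 0;
        else if ((c & 0xE0) == 0xC0) extra = 1;
        else if ((c & 0xF0) == 0xE0) extra = 2;
        else if ((c & 0xF8) == 0xF0) extra = 3;
        else return false;                         // invalid lead byte
        if (i + extra >= s.size()) return false;   // truncated sequence
        for (size_t j = 1; j <= extra; ++j) {
          unsigned char cont = static_cast<unsigned char>(s[i + j]);
          if ((cont & 0xC0) != 0x80) return false; // bad continuation byte
        }
        i += extra + 1;
      }
      return true;
    }

    void SetDisplayText(const std::string& utf8_text) {
      assert(IsLikelyUTF8(utf8_text));  // i.e., DCHECK at the boundary
      // ...
    }
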
>
> > We have had a lot of these problems in the code that interfaces with
> > WinHTTP and other networking code, where std::string is used to relay
> > headers that do not necessarily have a known encoding.  I've also seen
> > this kind of problem over and over again in the Mozilla code base.
>
> > I think we have much bigger fish to fry... so I'd need to hear a
> > convincing argument about why investing time and energy in converting
> > from UTF-16 to UTF-8 is a good idea.
>
> > -Darin
>
> >> > If someone can make a compelling performance argument for changing
> >> > Chrome's UI over to UTF-8 and also invent a solution that avoids the
> >> > problem I described above, then converting to UTF-8 would seem OK to
> >> > me.  But right now... it just looks like cost for not much benefit.
> >> > -Darin
>
> >> > On Wed, Feb 4, 2009 at 8:21 AM, Evan Martin <[email protected]> wrote:
>
> >> >> On Wed, Feb 4, 2009 at 6:53 AM, Dean McNamee <[email protected]>
> >> >> wrote:
> >> >> > I apologize for missing this discussion; I'm sure that I'm not
> >> >> > seeing the entire picture and the pros of this argument.  I
> >> >> > mentioned before that I'm in support of utf-8 everywhere we can
> >> >> > get it.
>
> >> >> I lost this argument, so I will defer this response to someone
> >> >> else.  :)