If you're storing UTF8 anyway, why not just use regular character strings? Doesn't it defeat the purpose of using UTF8 if you're combining it with a character type that isn't 1 byte?
On Wed Oct 29 2014 at 11:27:29 AM Kate Stone <katherine_st...@apple.com> wrote:

> On Oct 28, 2014, at 1:55 PM, Zachary Turner <ztur...@google.com> wrote:
>
> On Tue Oct 28 2014 at 1:46:26 PM Vince Harron <vhar...@google.com> wrote:
>
>> > - rework the Editline rewrite, so it either uses standard 8 bit chars,
>> > or wchar_t/utf8 depending on the platform. This would be conditionally
>> > built depending on the platform.
>>
>> This would be my favorite option if possible. wchar_t never really took
>> root on Linux AFAIK.
>
> Also probably the best option for Windows, although it's worth pointing
> out that at least for now, most other stuff in LLDB doesn't really use wide
> character strings either, so char would be the path of least resistance for
> Windows right now.
>
> With the Editline rewrite I made the explicit decision to insulate the
> rest of LLDB from wide characters and strings by encoding everything as
> UTF8. I agree that reverting to char-only input is a perfectly reasonable
> solution for platforms that don't yet include wchar-aware libedit
> implementations.
>
> Kate Stone k8st...@apple.com
> Xcode Runtime Analysis Tools
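
To make the trade-off concrete, here is a minimal sketch (not LLDB's actual code) of the kind of boundary Kate describes: wide-character input comes in from a wchar-aware libedit, gets converted to UTF8 right at the edge, and everything past that point sees only plain char strings. The ToUTF8 helper and the sample input line are made up for illustration.

    #include <codecvt>   // std::codecvt_utf8 (deprecated in C++17, but still available)
    #include <iostream>
    #include <locale>    // std::wstring_convert
    #include <string>

    // Hypothetical helper: convert a platform wchar_t string (UTF-16 on
    // Windows, UTF-32 on most Unix-like systems) into a UTF-8 encoded
    // std::string. Caveat: codecvt_utf8<wchar_t> does not handle UTF-16
    // surrogate pairs on Windows; codecvt_utf8_utf16, ICU, or LLVM's
    // ConvertUTF utilities would be needed for characters outside the BMP.
    static std::string ToUTF8(const std::wstring &wide) {
      std::wstring_convert<std::codecvt_utf8<wchar_t>> converter;
      return converter.to_bytes(wide);
    }

    int main() {
      // Pretend this line came back from a wchar-aware el_wgets() call.
      std::wstring line = L"breakpoint set -n main";

      // Convert once at the boundary; from here on only 1-byte chars
      // carrying UTF8 are passed around, so the rest of the program
      // never has to touch wchar_t.
      std::string utf8 = ToUTF8(line);
      std::cout << utf8 << " (" << utf8.size() << " bytes)\n";
      return 0;
    }

The question being debated is really where (or whether) that conversion happens: inside the Editline wrapper, as in Kate's rewrite, versus sticking to char-only input on platforms whose libedit has no wchar support.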