On Thu, Nov 4, 2010 at 6:09 AM, Simon Marlow marlo...@gmail.com wrote:
On 04/11/2010 02:35, David Sankel wrote:
On Wed, Nov 3, 2010 at 9:00 AM, Simon Marlow marlo...@gmail.com wrote:
On 03/11/2010 10:36, Bulat Ziganshin wrote:
Hello Max,
It is possible to output some non-Latin-1 symbols if you use the wide
string API, but not all of them. Basically the console supports all
European languages - Latin, Cyrillic and Greek - but nothing else.
2010/11/2 David Sankel cam...@gmail.com:
Is there a ghc wontfix bug ticket for this? Perhaps we
On 2 November 2010 21:05, David Sankel cam...@gmail.com wrote:
Is there a ghc wontfix bug ticket for this? Perhaps we can make a small C
test case and send it to the Microsoft people. Some[1] are reporting success
with Unicode console output.
I confirmed that I can output Chinese unicode from
On Wed, Nov 3, 2010 at 9:00 AM, Simon Marlow marlo...@gmail.com wrote:
On 03/11/2010 10:36, Bulat Ziganshin wrote:
Hello Max,
Wednesday, November 3, 2010, 1:26:50 PM, you wrote:
1. You need to use chcp 65001 to set the console code page to UTF8
2. It is very likely that your Windows
This is evidence for the broken Unicode support in the Windows
terminal and not a problem with GHC. I experienced the same many
times.
2010/11/2 David Sankel cam...@gmail.com:
On Mon, Nov 1, 2010 at 10:20 PM, David Sankel cam...@gmail.com wrote:
Hello all,
I'm attempting to output some
Is there a ghc wontfix bug ticket for this? Perhaps we can make a small C
test case and send it to the Microsoft people. Some[1] are reporting success
with Unicode console output.
David
[1] http://www.codeproject.com/KB/cpp/unicode_console_output.aspx
On Tue, Nov 2, 2010 at 3:49 AM, Krasimir
On Mon, Nov 1, 2010 at 10:20 PM, David Sankel cam...@gmail.com wrote:
Hello all,
I'm attempting to output some Unicode on the Windows console. I set my
Windows console code page to UTF-8 using chcp 65001.
The program:
-- Test.hs
main = putStr "λ.x→x"
The output of `runghc Test.hs`:
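The thread's usual workaround has two parts: switch the console code page with chcp 65001, and make GHC's runtime actually emit UTF-8 rather than the locale encoding. A minimal sketch of the second part (the hSetEncoding call is standard System.IO; whether the glyphs then render correctly still depends on the console and its font, which is the thread's real complaint):

```haskell
-- Test.hs: sketch of the workaround discussed above.
-- Assumes the console code page was already set with: chcp 65001
import System.IO (hSetEncoding, stdout, utf8)

main :: IO ()
main = do
  hSetEncoding stdout utf8   -- force UTF-8 regardless of the locale encoding
  putStrLn "λ.x→x"
```

Even with both steps, older Windows consoles mishandle code page 65001, which is why the thread points the finger at the console rather than at GHC.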
On Saturday 11 September 2010 03:12:11, Greg wrote:
If I read the Haskell Report correctly, operators are named by (symbol
{symbol | : }), where symbol is either an ascii symbol (including *) or
a unicode symbol (defined as any Unicode symbol or punctuation). I'm
pretty sure º is a unicode
On 9/10/10 21:39 , Daniel Fischer wrote:
On Saturday 11 September 2010 03:12:11, Greg wrote:
a unicode symbol (defined as any Unicode symbol or punctuation). I'm
pretty sure º is a unicode symbol or punctuation.
Prelude Data.Char
On 9/10/10 21:12 , Greg wrote:
unicode symbol (defined as any Unicode symbol or punctuation). I'm pretty
sure º is a unicode symbol or punctuation.
No, it's a raised lowercase o used by convention to indicate gender of
abbreviated ordinals. You
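The disagreement above is easy to settle in GHCi with Data.Char: º (U+00BA MASCULINE ORDINAL INDICATOR) is classified as a letter, not a symbol, so under the Report's rule it cannot appear in an operator name, while the degree sign ° (U+00B0) has category OtherSymbol and is legal. A quick check:

```haskell
import Data.Char (generalCategory, isLetter, isSymbol)

main :: IO ()
main = do
  -- U+00BA MASCULINE ORDINAL INDICATOR: a letter, so illegal in operators
  print (isSymbol 'º', isLetter 'º')            -- (False,True)
  -- U+00B0 DEGREE SIGN: a genuine symbol, so legal in operators
  print (isSymbol '°', generalCategory '°')
```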
Oh cripe... Yet another reason not to use funny symbols -- even the developer can't tell them apart!

Yeah, I wanted a degree sign, but if it's all that subtle then I should probably reconsider the whole idea.

On the positive side, I know what ª is for now, so today wasn't a complete waste.
On Wed, Apr 21, 2010 at 12:51 AM, Yitzchak Gale g...@sefer.org wrote:
Yes, sorry. Either use TWO DOT LEADER, or remove
this Unicode alternative altogether
(i.e. leave it the way it is *without* the UnicodeSyntax extension).
I'm happy with either of those. I just don't like moving the dots
up
I wrote:
My opinion is that we should either use TWO DOT LEADER,
or just leave it as it is now, two FULL STOP characters.
Simon Marlow wrote:
Just to be clear, you're suggesting *removing* the Unicode alternative for
'..' from GHC's UnicodeSyntax extension?
Yes, sorry. Either use TWO DOT
On 15/04/2010 18:12, Yitzchak Gale wrote:
My opinion is that we should either use TWO DOT LEADER,
or just leave it as it is now, two FULL STOP characters.
Just to be clear, you're suggesting *removing* the Unicode alternative
for '..' from GHC's UnicodeSyntax extension?
I have no strong
I think the baseline ellipsis makes much more sense; it's
hard to see how the midline ellipsis was chosen.
--
Jason Dusek
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
My opinion is that we should either use TWO DOT LEADER,
or just leave it as it is now, two FULL STOP characters.
Two dots indicating a range is not the same symbol
as a three dot ellipsis.
Traditional non-Unicode Haskell will continue to be
around for a long time to come. It would be very
That is very interesting. I didn't know the history of those characters.
If we can't find a Unicode character that everyone agrees upon,
I also don't see any problem with leaving it as two FULL STOP
characters.
I agree. I don't like the current Unicode variant for .., therefore
I suggested an
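For reference, the outcome being debated here: GHC's UnicodeSyntax extension today accepts U+2025 TWO DOT LEADER (‥) as the alternative for `..` (this reflects later GHC versions, not something settled within the quoted thread):

```haskell
{-# LANGUAGE UnicodeSyntax #-}
-- With UnicodeSyntax, '‥' (TWO DOT LEADER) may replace '..' in ranges.
-- The two lists below denote the same value.
main :: IO ()
main = do
  print [1 .. 5]
  print [1 ‥ 5]
```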
On 14 January 2005 12:58, Dimitry Golubovsky wrote:
Now I need more advice on which flavor of Unicode support to
implement. In Haskell-cafe, there were 3 flavors summarized: I am
reposting the table here (its latest version).
|Sebastien's| Marcin's | Hugs
Hi,
Simon Marlow wrote:
You're doing fine - but a better place for the tables is as part of the
base package, rather than the RTS. We already have some C files in the
base package: see libraries/base/cbits, for example. I suggest just
putting your code in there.
I have done that - now GHCi
On 11 January 2005 02:29, Dimitry Golubovsky wrote:
Bad thing is, LD_PRELOAD does not work on all systems. So I tried to
put the code directly into the runtime (where I believe it should be;
the Unicode properties table is packed, and won't eat much space). I
renamed foreign function names in
Dylan Thurston [EMAIL PROTECTED] writes:
Right. In Unicode, the concept of a character is not really so
useful;
After reading a bit about it, I'm certainly confused.
Unicode/ISO-10646 contains a lot of things that aren't really one
character, e.g. ligatures.
most functions that
- Original Message -
From: Ketil Malde [EMAIL PROTECTED]
To: Dylan Thurston [EMAIL PROTECTED]
Cc: Andrew J Bromage [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Monday, October 08, 2001 9:02 AM
Subject: Re: UniCode
(The spelling is 'Unicode' (and none other).)
Dylan
G'day all.
On Fri, Oct 05, 2001 at 06:17:26PM +, Marcin 'Qrczak' Kowalczyk wrote:
This information is out of date. AFAIR about 4 of them are assigned.
Most for Chinese (current, not historic).
I wasn't aware of this. Last time I looked was Unicode 3.0. Thanks
for the update.
In
Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes:
Fri, 5 Oct 2001 02:29:51 -0700 (PDT), Krasimir Angelov [EMAIL PROTECTED] writes:
Why Char is 32 bit. UniCode characters is 16 bit.
No, Unicode characters have 21 bits (range U+0000..10FFFF).
We've been through all this, of course, but
G'day all.
On Fri, Oct 05, 2001 at 02:29:51AM -0700, Krasimir Angelov wrote:
Why Char is 32 bit. UniCode characters is 16 bit.
It's not quite as simple as that. There is a set of one million
(more correctly, 1M) Unicode characters which are only accessible
using surrogate pairs (i.e. two
Fri, 5 Oct 2001 23:23:50 +1000, Andrew J Bromage [EMAIL PROTECTED] writes:
There is a set of one million (more correctly, 1M) Unicode characters
which are only accessible using surrogate pairs (i.e. two UTF-16
codes). There are currently none of these codes assigned,
This information is out
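The surrogate-pair mechanism described above can be written down directly. This is a sketch of the standard UTF-16 encoding of a supplementary-plane code point (the function name is mine, not from the thread):

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Char (ord)

-- Encode a code point above U+FFFF as a (high, low) UTF-16 surrogate pair:
-- subtract 0x10000, then split the remaining 20 bits into two halves.
surrogatePair :: Char -> (Int, Int)
surrogatePair c =
  let v = ord c - 0x10000            -- 20 bits remain after the offset
  in ( 0xD800 + (v `shiftR` 10)      -- high surrogate: top 10 bits
     , 0xDC00 + (v .&. 0x3FF) )      -- low surrogate: bottom 10 bits

main :: IO ()
main = print (surrogatePair '\x1D11E')  -- U+1D11E encodes as 0xD834, 0xDD1E
```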
05 Oct 2001 14:35:17 +0200, Ketil Malde [EMAIL PROTECTED] writes:
Does Haskell's support of Unicode mean UTF-32, or full UCS-4?
It's not decided officially. GHC uses UTF-32. It's expected that
UCS-4 will vanish and ISO-10646 will be reduced to the same range
U+0000..10FFFF as Unicode.
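As this message anticipated, GHC settled on a full-width Char covering exactly the Unicode scalar range, which is easy to verify on any modern GHC:

```haskell
import Data.Char (ord)

main :: IO ()
main = do
  print (ord (maxBound :: Char))   -- 1114111, i.e. 0x10FFFF
  print (ord (minBound :: Char))   -- 0
```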
Manuel M. T. Chakravarty writes:
The problem with restricting yourself to the Jouyou-Kanji is
that you have a hard time with names (of persons and
places). Many exotic and otherwise unused Kanji are used in
names (for historical reasons) and as the Kanji
representation of a name is the
Marcin 'Qrczak' Kowalczyk wrote:
As for the language standard: I hope that Char will be allowed or
required to have >=30 bits instead of current 16; but never more than
Int, to be able to use ord and chr safely.
Er does it have to? The Java Virtual Machine implements Unicode with
16 bits.
OTOH, it wouldn't be hard to change GHC's Char datatype to be a
full 32-bit integral data type.
Could we do it please?
It will not break anything if done slowly. I imagine that
{read,write}CharOffAddr and _ccall_ will still use only 8 bits of
Char. But after Char is wide, libraries
George Russell writes:
Marcin 'Qrczak' Kowalczyk wrote:
As for the language standard: I hope that Char will be allowed or
required to have >=30 bits instead of current 16; but never more than
Int, to be able to use ord and chr safely.
Er does it have to? The Java Virtual Machine
Tue, 16 May 2000 10:44:28 +0200, George Russell [EMAIL PROTECTED] writes:
As for the language standard: I hope that Char will be allowed or
required to have >=30 bits instead of current 16; but never more than
Int, to be able to use ord and chr safely.
Er does it have to? The Java
Tue, 16 May 2000 12:26:12 +0200 (MET DST), Frank Atanassow [EMAIL PROTECTED] writes:
Of course, you can always come up with specialized schemes involving stateful
encodings and/or "block-swapping" (using the Unicode private-use areas, for
example), but then, that subverts the purpose of
Frank Atanassow [EMAIL PROTECTED] wrote,
George Russell writes:
Marcin 'Qrczak' Kowalczyk wrote:
As for the language standard: I hope that Char will be allowed or
required to have >=30 bits instead of current 16; but never more than
Int, to be able to use ord and chr safely.
Er