Re: [Haskell-cafe] Installation of GLFW package

2007-10-02 Thread Peter Verswyvelen

Hi Immanuel,

Are you using Windows or Linux or OSX?

Under Windows, I was able to install it (no need to compile, since the 
libraries are bundled).


I could also run some examples from the SOE book, however IMO the GLFW 
Haskell wrapper seems to contain some bugs (at least on Windows) that 
prevented some demos from working. I locally fixed the GLFW.hs file so 
that all seems to work.


But you don't seem to have gotten to that step yet... I had the error you 
described once when installing another package that had an incomplete 
cabal file, but I guess that can't be the case here...


If you just want to try SOE, you can also try GTK2HS 
(http://haskell.org/gtk2hs/); note you'll have 
to replace each "import SOE" line with "import Graphics.SOE.Gtk".


Cheers,
Peter

Immanuel Normann wrote:

Hello,

I have just read the thread about installation of GLUT package started
at 9/3/2007 by Ronald Guida. Installation of the GLFW package is very
much related to that. However, I haven't found the solution for
installing the GLFW package successfully.

I have downloaded 


http://www.cs.yale.edu/homes/hl293/download/GLFW-20070804.zip

and followed the compile and installation instructions from the
README.txt file. Actually I haven't noticed any error messages.

Then I tried SimpleGraphics from SOE (downloaded from
http://www.cs.yale.edu/homes/hl293/download/SOE-20070830.zip)
with ghci-6.6. Again loading the file did not raise any error.

But when I tried to start 


main
  = runGraphics (
      do w <- openWindow
              "My First Graphics Program" (300,300)
         drawInWindow w (text (100,200) "Hello Graphics World")
         k <- getKey w
         closeWindow w
    )

I got this error message:

During interactive linking, GHCi couldn't find the following symbol:
  GLFWzm0zi1_GraphicsziUIziGLFW_initializze_closure
This may be due to you not asking GHCi to load extra object files,
archives or DLLs needed by your current session.  Restart GHCi,
specifying
the missing library using the -L/path/to/object/dir and -lmissinglibname
flags, or simply by naming the relevant files on the GHCi command line.
Alternatively, this link failure might indicate a bug in GHCi.

Here I am lost. What is the missing library?

Regards,
Immanuel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unfriendly hs-plugins error

2007-10-02 Thread Björn Buckwalter
That helped, thanks!


On 10/2/07, jeeva suresh [EMAIL PROTECTED] wrote:
 I had a similar problem; I solved it by using the development version of
 hs-plugins (i.e. darcs get --set-scripts-executable
 http://www.cse.unsw.edu.au/~dons/code/hs-plugins)


 On 02/10/2007, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:
 
 
  On Oct 1, 2007, at 21:59 , Björn Buckwalter wrote:
 
   Dear all,
  
   I'm getting a rather unfriendly error when trying to load a plugin
   with hs-plugins:
  
   my_program: Ix{Int}.index: Index (65536) out of range ((0,7))
 
  This tends to mean that hs-plugins doesn't understand the format of
  the .hi (compiled haskell module interface) file, which happens every
  time a new ghc is released.  Likely it's not been ported to work with
  6.6.1 yet.
 
  --
  brandon s. allbery [solaris,freebsd,perl,pugs,haskell]
 [EMAIL PROTECTED]
  system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
  electrical and computer engineering, carnegie mellon universityKF8NH
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
 
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 



 --
 -Jeeva

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread ChrisK
Deborah Goldsmith wrote:

 UTF-16 is the native encoding used for Cocoa, Java, ICU, and Carbon, and
 is what appears in the APIs for all of them. UTF-16 is also what's
 stored in the volume catalog on Mac disks. UTF-8 is only used in BSD
 APIs for backward compatibility. It's also used in plain text files (or
 XML or HTML), again for compatibility.
 
 Deborah


On OS X, Cocoa and Carbon use Core Foundation, whose API does not have a
one-true-encoding internally.  Follow the rather long URL for details:

http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFStrings/index.html?http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFStrings/Articles/StringStorage.html#//apple_ref/doc/uid/20001179

I would vote for an API that not only hides the internal store, but allows
different internal stores to be used in a mostly compatible way.

However, there is a UniChar typedef on OS X which is the same unsigned 16-bit
integer that Java's JNI would use.

-- 
Chris

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Alex Queiroz
Hallo,

 For fun and learning I'm trying to parse R5RS Scheme with Parsec.
The code to parse lists follows:

--
-- Lists
--

parseLeftList :: Parser [SchDatum]
parseLeftList = do
  char '('
  many parseDatum >>= return . filter (/= SchAtmosphere)

parseDottedList :: [SchDatum] -> Parser SchDatum
parseDottedList ls = do
  char '.'
  many1 parseAtmosphere
  d <- parseDatum
  many parseAtmosphere
  char ')'
  return $ SchDottedList ls d

parseProperList :: [SchDatum] -> Parser SchDatum
parseProperList ls = do
  char ')'
  return $ SchList ls

parseList :: Parser SchDatum
parseList = do
  ls <- parseLeftList
  (parseDottedList ls) <|> (parseProperList ls)

 I've factored out the common left sub-expression into
parseLeftList. The problem is that "..." is a valid identifier, so when
the parser sees a single dot inside the list, it tries to
match it against "...", which fails. Can anybody give advice on how to
rewrite these list parsing functions?

Cheers,
-- 
-alex
http://www.ventonegro.org/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Brandon S. Allbery KF8NH


On Oct 2, 2007, at 9:52 , Alex Queiroz wrote:


  (parseDottedList ls) <|> (parseProperList ls)

 I've factored out the common left sub-expression in
parseLeftList. The problem is that ... is a valid identifier so when
inside the left of the list the parser sees a single dot, it tries to
match it with ..., which fails. Can anybody give advice on how to
rewrite these list parsing functions?


  try (parseDottedList ls) <|> parseProperList ls

Overuse of try is a bad idea because it's slow, but sometimes it's
the only way to go; it ensures backtracking in cases like this.
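
A minimal, self-contained sketch of that point (the abc/abd parsers below
are invented for illustration, not taken from this thread): both
alternatives start with "ab", so the first one consumes input before
failing, and only try lets <|> reach the second one.

import Text.ParserCombinators.Parsec

abc, abd :: Parser String
abc = string "abc"
abd = string "abd"

main :: IO ()
main = do
  -- fails: abc consumed "ab" before failing, so <|> never tries abd
  print (parse (abc <|> abd)     "" "abd")
  -- succeeds with Right "abd": try rolls the consumed input back
  print (parse (try abc <|> abd) "" "abd")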


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Alex Queiroz
Hallo,

On 10/2/07, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:

 On Oct 2, 2007, at 9:52 , Alex Queiroz wrote:

   (parseDottedList ls) <|> (parseProperList ls)
 
   I've factored out the common left sub-expression in
  parseLeftList. The problem is that ... is a valid identifier so when
  inside the left of the list the parser sees a single dot, it tries to
  match it with ..., which fails. Can anybody give advice on how to
  rewrite these list parsing functions?

try (parseDottedList ls) <|> parseProperList ls

 Overuse of try is a bad idea because it's slow, but sometimes it's
 the only way to go; it insures backtracking in cases like this.


 This does not work. The parser chokes in parseLeftList, because
it finds a single dot, which is not the beginning of "...".

Cheers,
-- 
-alex
http://www.ventonegro.org/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Brandon S. Allbery KF8NH


On Oct 2, 2007, at 10:36 , Alex Queiroz wrote:


 This does not work. The parser chokes in parseLeftList, because
it finds a single dot, which is not the beginning of "...".


Sorry, just woke up and still not quite tracking right, so I modified  
the wrong snippet of code.  The trick is to wrap parseLeftList in a  
try, so the parser retries the next alternative when it fails.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Alex Queiroz
Hallo,

On 10/2/07, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:

 Sorry, just woke up and still not quite tracking right, so I modified
 the wrong snippet of code.  The trick is to wrap parseLeftList in a
 try, so the parser retries the next alternative when it fails.


 Since "..." can only appear at the end of a list, I removed "..."
from the possible symbols and added a new function:

parseThreeDottedList :: [SchDatum] -> Parser SchDatum
parseThreeDottedList ls = do
  string "..."
  many parseAtmosphere
  char ')'
  return $ SchList $ ls ++ [SchSymbol "..."]

parseList :: Parser SchDatum
parseList = do
  ls <- parseLeftList
  try (parseThreeDottedList ls) <|> (parseDottedList ls) <|>
    (parseProperList ls)

 Thanks for the help.

Cheers,
-- 
-alex
http://www.ventonegro.org/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Deborah Goldsmith

On Oct 2, 2007, at 5:11 AM, ChrisK wrote:

Deborah Goldsmith wrote:

UTF-16 is the native encoding used for Cocoa, Java, ICU, and  
Carbon, and

is what appears in the APIs for all of them. UTF-16 is also what's
stored in the volume catalog on Mac disks. UTF-8 is only used in BSD
APIs for backward compatibility. It's also used in plain text  
files (or

XML or HTML), again for compatibility.

Deborah



On OS X, Cocoa and Carbon use Core Foundation, whose API does not  
have a

one-true-encoding internally.  Follow the rather long URL for details:

http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFStrings/index.html?http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFStrings/Articles/StringStorage.html#//apple_ref/doc/uid/20001179


I would vote for an API that not just hides the internal store, but  
allows

different internal stores to be used in a mostly compatible way.

However, There is a UniChar typedef on OS X which is the same  
unsigned 16 bit

integer as Java's JNI would use.


UTF-16 is the type used in all the APIs. Everything else is  
considered an encoding conversion.


CoreFoundation uses UTF-16 internally except when the string fits  
entirely in a single-byte legacy encoding like MacRoman or  
MacCyrillic. If any kind of Unicode processing needs to be done to  
the string, it is first coerced to UTF-16. If it weren't for  
backwards compatibility issues, I think we'd use UTF-16 all the time  
as the machinery for switching encodings adds complexity. I wouldn't  
advise it for a new library.


Deborah

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installation of GLFW package

2007-10-02 Thread Paul L
It seems like the GLFW C binaries weren't included in your installed GLFW
Haskell module. Did you do the last step by running install.bat or
install.sh instead of "runhaskell Setup install"?

Regards,
Paul Liu

On 10/2/07, Immanuel Normann [EMAIL PROTECTED] wrote:
 Hello,

 I have just read the thread about installation of GLUT package started
 at 9/3/2007 by Ronald Guida. Installation of the GLFW package is very
 much related to that. However, I haven't found the solution for
 installing the GLFW successsfully.

 I have downloaded

 http://www.cs.yale.edu/homes/hl293/download/GLFW-20070804.zip

 and followed the compile and installation instructions from the
 README.txt file. Actually I haven't noticed any error messages.

 Then I tried SimpleGraphics from SOE (downloaded from
 http://www.cs.yale.edu/homes/hl293/download/SOE-20070830.zip)
 with ghci-6.6. Again loading the file did not raise any error.

 But when I tried to start

 main
   = runGraphics (
       do w <- openWindow
               "My First Graphics Program" (300,300)
          drawInWindow w (text (100,200) "Hello Graphics World")
          k <- getKey w
          closeWindow w
     )

 I got this error message:

 During interactive linking, GHCi couldn't find the following symbol:
   GLFWzm0zi1_GraphicsziUIziGLFW_initializze_closure
 This may be due to you not asking GHCi to load extra object files,
 archives or DLLs needed by your current session.  Restart GHCi,
 specifying
 the missing library using the -L/path/to/object/dir and -lmissinglibname
 flags, or simply by naming the relevant files on the GHCi command line.
 Alternatively, this link failure might indicate a bug in GHCi.

 Here I am lost. What is the missing library?

 Regards,
 Immanuel

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Jonathan Cast
On Tue, 2007-10-02 at 08:02 -0700, Deborah Goldsmith wrote:
 On Oct 2, 2007, at 5:11 AM, ChrisK wrote:
  Deborah Goldsmith wrote:
 
  UTF-16 is the native encoding used for Cocoa, Java, ICU, and  
  Carbon, and
  is what appears in the APIs for all of them. UTF-16 is also what's
  stored in the volume catalog on Mac disks. UTF-8 is only used in BSD
  APIs for backward compatibility. It's also used in plain text  
  files (or
  XML or HTML), again for compatibility.
 
  Deborah
 
 
  On OS X, Cocoa and Carbon use Core Foundation, whose API does not  
  have a
  one-true-encoding internally.  Follow the rather long URL for details:
 
  http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFStrings/index.html?http://developer.apple.com/documentation/CoreFoundation/Conceptual/CFStrings/Articles/StringStorage.html#//apple_ref/doc/uid/20001179
 
  I would vote for an API that not just hides the internal store, but  
  allows
  different internal stores to be used in a mostly compatible way.
 
  However, There is a UniChar typedef on OS X which is the same  
  unsigned 16 bit
  integer as Java's JNI would use.
 
 UTF-16 is the type used in all the APIs. Everything else is  
 considered an encoding conversion.
 
 CoreFoundation uses UTF-16 internally except when the string fits  
 entirely in a single-byte legacy encoding like MacRoman or  
 MacCyrillic. If any kind of Unicode processing needs to be done to  
 the string, it is first coerced to UTF-16. If it weren't for  
 backwards compatibility issues, I think we'd use UTF-16 all the time  
 as the machinery for switching encodings adds complexity. I wouldn't  
 advise it for a new library.

I would like to, again, strongly argue against sacrificing compatibility
with Linux/BSD/etc. for the sake of compatibility with OS X or Windows.
FFI bindings have to convert data formats in any case; Haskell shouldn't
gratuitously break Linux support (or make life harder on Linux) just to
support proprietary operating systems better.

Now, if /independent of the details of MacOS X/ UTF-16 is (objectively)
better, it can be converted to anything by the FFI.  But I am strongly
opposed to doing it the way Java or MacOS X or Win32 or anyone else does
it, at the expense of Linux.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Using C-c C-l to load .hsc file into ghci in Emacs

2007-10-02 Thread Johan Tibell
How can I get C-c C-l to first run cpp on a .hsc file and then load
the .hs file?

I checked out the network package from darcs and then did:

Start ghci, C-c C-z, then:

Prelude> :set -cpp

And then pressed load, C-c C-l:

Prelude> :load /Users/tibell/src/haskell/network-bytestring/Network/Socket.hsc

command line:
Could not find module
`/Users/tibell/src/haskell/network-bytestring/Network/Socket.hsc':
  Use -v to see a list of the files searched for.
Failed, modules loaded: none.
Prelude> :load /Users/tibell/src/haskell/network-bytestring/Network/Socket.hsc

command line:
Could not find module
`/Users/tibell/src/haskell/network-bytestring/Network/Socket.hsc':
  Use -v to see a list of the files searched for.
Failed, modules loaded: none.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Miguel Mitrofanov
I would like to, again, strongly argue against sacrificing  
compatibility
with Linux/BSD/etc. for the sake of compatibility with OS X or  
Windows.


Ehm? I used to think MacOS is a sort of BSD...
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Ryan Ingram
I don't know if this applies to Scheme parsing, but I find it's often
helpful to introduce a tokenizer into the parser to centralize the use
of try in one place:

type Token = String

tokRaw :: Parser Token
tokRaw = {- implement this yourself depending on language spec -}

tok :: Parser Token
tok = do
    t <- tokRaw
    many spaces
    return t

-- wrap your outside parser with this; it gets the tokenizer
-- started because we only handle eating spaces after tokens,
-- not before
startParser :: Parser a -> Parser a
startParser a = many spaces >> a

sat :: (Token -> Maybe a) -> Parser a
sat f = try $ do
    t <- tok
    case f t of
        Nothing -> mzero
        Just a  -> return a

lit :: Token -> Parser Token
lit s = sat (test s) <?> s where
    test s t = if (s == t) then Just s else Nothing

Now if you replace uses of "string" and "char" in your code with
"lit", you avoid the problem of parses failing because they consumed
some input from the wrong token type before failing.
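
For instance (the names below are invented for illustration, assuming the
definitions above plus Parsec's <|> are in scope, and assuming tokRaw
yields "." and "..." as whole tokens), the dot-versus-"..." ambiguity from
earlier in this thread then needs no extra try at the grammar level:

dot, ellipsis :: Parser Token
dot      = lit "."
ellipsis = lit "..."

-- Safe without a surrounding try: if the next token is ".", the
-- ellipsis branch fails inside sat (which is already wrapped in try),
-- so no input is consumed and dot still gets its chance.
dotOrEllipsis :: Parser Token
dotOrEllipsis = ellipsis <|> dot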

On 10/2/07, Alex Queiroz [EMAIL PROTECTED] wrote:
 Hallo,

 On 10/2/07, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:
 
  Sorry, just woke up and still not quite tracking right, so I modified
  the wrong snippet of code.  The trick is to wrap parseLeftList in a
  try, so the parser retries the next alternative when it fails.
 

  Since "..." can only appear at the end of a list, I removed "..."
 from the possible symbols and added a new function:

 parseThreeDottedList :: [SchDatum] -> Parser SchDatum
 parseThreeDottedList ls = do
   string "..."
   many parseAtmosphere
   char ')'
   return $ SchList $ ls ++ [SchSymbol "..."]

 parseList :: Parser SchDatum
 parseList = do
   ls <- parseLeftList
   try (parseThreeDottedList ls) <|> (parseDottedList ls) <|>
     (parseProperList ls)

 Thanks for the help.

 Cheers,
 --
 -alex
 http://www.ventonegro.org/
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Jonathan Cast
On Tue, 2007-10-02 at 22:05 +0400, Miguel Mitrofanov wrote:
  I would like to, again, strongly argue against sacrificing  
  compatibility
  with Linux/BSD/etc. for the sake of compatibility with OS X or  
  Windows.
 
 Ehm? I've used to think MacOS is a sort of BSD...

Cocoa, then.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Stefan O'Rear
On Tue, Oct 02, 2007 at 08:02:30AM -0700, Deborah Goldsmith wrote:
 UTF-16 is the type used in all the APIs. Everything else is considered an 
 encoding conversion.

 CoreFoundation uses UTF-16 internally except when the string fits entirely 
 in a single-byte legacy encoding like MacRoman or MacCyrillic. If any kind 
 of Unicode processing needs to be done to the string, it is first coerced 
 to UTF-16. If it weren't for backwards compatibility issues, I think we'd 
 use UTF-16 all the time as the machinery for switching encodings adds 
 complexity. I wouldn't advise it for a new library.

I do not believe that anyone was seriously advocating multiple blessed
encodings.  The main question is *which* encoding to bless.  99+% of
text I encounter is in US-ASCII, so I would favor UTF-8.  Why is UTF-16
better for me?

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsing R5RS Scheme with Parsec

2007-10-02 Thread Stefan O'Rear
On Tue, Oct 02, 2007 at 11:36:52AM -0300, Alex Queiroz wrote:
 Hallo,
 
 On 10/2/07, Brandon S. Allbery KF8NH [EMAIL PROTECTED] wrote:
 
  On Oct 2, 2007, at 9:52 , Alex Queiroz wrote:
 
  (parseDottedList ls) <|> (parseProperList ls)
  
I've factored out the common left sub-expression in
   parseLeftList. The problem is that ... is a valid identifier so when
   inside the left of the list the parser sees a single dot, it tries to
   match it with ..., which fails. Can anybody give advice on how to
   rewrite these list parsing functions?
 
  try (parseDottedList ls) <|> parseProperList ls
 
  Overuse of try is a bad idea because it's slow, but sometimes it's
  the only way to go; it insures backtracking in cases like this.
 
 
  This does not work. The parser chokes in parseLeftList, because
 it finds a single dot which is not the beginning of 

I suggest left-factoring.

parseThingyOrEOL =
     (space >> parseThingyOrEOL)
 <|> (fmap Left parseAtom)
 <|> (char '.' >> parseThingyOrEOL >>= \(Left x) -> return (Right x))
 <|> (char ')' >> return (Right nil))
 <|> (char '(' >> fmap Left (fix (\plist -> do obj <- parseThingyOrEOL
                                               case obj of
                                                 Left x  -> fmap (Cons x) plist
                                                 Right x -> return x)))

etc.

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Johan Tibell
 I do not believe that anyone was seriously advocating multiple blessed
 encodings.  The main question is *which* encoding to bless.  99+% of
 text I encounter is in US-ASCII, so I would favor UTF-8.  Why is UTF-16
 better for me?

All software I write professionally has to support 40 languages
(including CJK ones), so I would prefer UTF-16 in case I could use
Haskell at work some day in the future. I don't know that who uses which
encoding the most is good grounds for picking an encoding, though. Ease of
implementation and speed on some representative sample set of text may
be.

-- Johan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installation of GLFW package

2007-10-02 Thread Immanuel Normann
Am Dienstag, den 02.10.2007, 23:18 +0800 schrieb Paul L:
 It seems like the GLFW C binaries wasn't included in your GLFW Haskell
 module installed. Did you do the last step by running install.bat or
 install.sh instead of runhaskell Setup install?

Now it works! ... although I am unfortunately not able to reconstruct
what I did to make it run.

Thank you,
Immanuel
 
 Regards,
 Paul Liu
 
 On 10/2/07, Immanuel Normann [EMAIL PROTECTED] wrote:
  Hello,
 
  I have just read the thread about installation of GLUT package started
  at 9/3/2007 by Ronald Guida. Installation of the GLFW package is very
  much related to that. However, I haven't found the solution for
  installing the GLFW successsfully.
 
  I have downloaded
 
  http://www.cs.yale.edu/homes/hl293/download/GLFW-20070804.zip
 
  and followed the compile and installation instructions from the
  README.txt file. Actually I haven't noticed any error messages.
 
  Then I tried SimpleGraphics from SOE (downloaded from
  http://www.cs.yale.edu/homes/hl293/download/SOE-20070830.zip)
  with ghci-6.6. Again loading the file did not raise any error.
 
  But when I tried to start
 
  main
    = runGraphics (
        do w <- openWindow
                "My First Graphics Program" (300,300)
           drawInWindow w (text (100,200) "Hello Graphics World")
           k <- getKey w
           closeWindow w
      )
 
  I got this error message:
 
  During interactive linking, GHCi couldn't find the following symbol:
GLFWzm0zi1_GraphicsziUIziGLFW_initializze_closure
  This may be due to you not asking GHCi to load extra object files,
  archives or DLLs needed by your current session.  Restart GHCi,
  specifying
  the missing library using the -L/path/to/object/dir and -lmissinglibname
  flags, or simply by naming the relevant files on the GHCi command line.
  Alternatively, this link failure might indicate a bug in GHCi.
 
  Here I am lost. What is the missing library?
 
  Regards,
  Immanuel
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Stefan O'Rear
On Tue, Oct 02, 2007 at 11:05:38PM +0200, Johan Tibell wrote:
  I do not believe that anyone was seriously advocating multiple blessed
  encodings.  The main question is *which* encoding to bless.  99+% of
  text I encounter is in US-ASCII, so I would favor UTF-8.  Why is UTF-16
  better for me?
 
 All software I write professional have to support 40 languages
 (including CJK ones) so I would prefer UTF-16 in case I could use
 Haskell at work some day in the future. I dunno that who uses what
 encoding the most is good grounds to pick encoding though. Ease of
 implementation and speed on some representative sample set of text may
 be.

UTF-8 supports CJK languages too.  The only question is efficiency, and
I believe CJK is still a relatively uncommon case compared to English
and other Latin-alphabet languages.  (That said, I live in a country all
of whose dominant languages use the Latin alphabet)

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Deborah Goldsmith

On Oct 2, 2007, at 8:44 AM, Jonathan Cast wrote:
I would like to, again, strongly argue against sacrificing  
compatibility
with Linux/BSD/etc. for the sake of compatibility with OS X or  
Windows.
FFI bindings have to convert data formats in any case; Haskell  
shouldn't
gratuitously break Linux support (or make life harder on Linux) just  
to

support proprietary operating systems better.

Now, if /independent of the details of MacOS X/, UTF-16 is better
(objectively), it can be converted to anything by the FFI.  But  
doing it
the way Java or MacOS X or Win32 or anyone else does it, at the  
expense

of Linux, I am strongly opposed to.


No one is advocating that. Any Unicode support library needs to  
support exporting text as UTF-8 since it's so widely used. It's used  
on Mac OS X, too, in exactly the same contexts it would be used on  
Linux. However, UTF-8 is a poor choice for internal representation.


On Oct 2, 2007, at 2:32 PM, Stefan O'Rear wrote:
UTF-8 supports CJK languages too.  The only question is efficiency,  
and

I believe CJK is still a relatively uncommon case compared to English
and other Latin-alphabet languages.  (That said, I live in a country  
all

of whose dominant languages use the Latin alphabet)


First of all, non-Latin countries already represent a large fraction  
of computer usage and the computer market. It is not at all  
relatively uncommon. Japan alone is a huge market. China is a huge  
market.


Second, it's not just CJK, but anything that's not mostly ASCII.  
Russian, Greek, Thai, Arabic, Hebrew, etc. etc. etc. UTF-8 is intended  
for compatibility with existing software that expects multibyte  
encodings. It doesn't work well as an internal representation. Again,  
no one is saying a Unicode library shouldn't have full support for  
input and output of UTF-8 (and other encodings).


If you want to process ASCII text and squeeze out every last ounce of  
performance, use byte strings. Unicode strings should be optimized for  
representing and processing human language text, a large share of  
which is not in the Latin alphabet.


Remember, speakers of English and other Latin-alphabet languages are a  
minority in the world, though not in the computer-using world. Yet.


Deborah

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Deborah Goldsmith

On Oct 2, 2007, at 3:01 PM, Twan van Laarhoven wrote:

Lots of people wrote:
 I want a UTF-8 bikeshed!
 No, I want a UTF-16 bikeshed!

What the heck does it matter what encoding the library uses  
internally? I expect the interface to be something like (from my own  
CompactString library):

 fromByteString :: Encoding -> ByteString -> UnicodeString
 toByteString   :: Encoding -> UnicodeString -> ByteString


I agree, from an API perspective the internal encoding doesn't matter.


The only matter is efficiency for a particular encoding.


This matters a lot.



I would suggest that we get a working library first. Either UTF-8 or  
UTF-16 will do, as long as it works.


Even better would be to implement both (and perhaps more encodings),  
and then benchmark them to get a sensible default. Then the choice  
can be made available to the user as well, in case someone has  
specifix needs. But again: get it working first!


The problem is that the internal encoding can have a big effect on the  
implementation of the library. It's better not to have to do it over  
again if the first choice is not optimal.


I'm just trying to share the experience of the Unicode Consortium, the  
ICU library contributors, and Apple, with the Haskell community. They,  
and I personally, have many years of experience implementing support  
for Unicode.


Anyway, I think we're starting to repeat ourselves...

Deborah

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Twan van Laarhoven

Lots of people wrote:
 I want a UTF-8 bikeshed!
 No, I want a UTF-16 bikeshed!

What the heck does it matter what encoding the library uses internally? 
I expect the interface to be something like (from my own CompactString 
library):

 fromByteString :: Encoding -> ByteString -> UnicodeString
 toByteString   :: Encoding -> UnicodeString -> ByteString

The only matter is efficiency for a particular encoding.

I would suggest that we get a working library first. Either UTF-8 or 
UTF-16 will do, as long as it works.


Even better would be to implement both (and perhaps more encodings), and 
then benchmark them to get a sensible default. Then the choice can be 
made available to the user as well, in case someone has specifix needs. 
But again: get it working first!


Twan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] build problems with hscurses

2007-10-02 Thread brad clawsie
hoping someone here can help me with a build problem

when going through the hackage build stage, i get numerous errors
like:

HSCurses/Curses.hs:N:0: invalid preprocessing directive #def

where the line number N ranges from 1597 to 1621.

is there a special directive i need for runhaskell? any info
appreciated. 

(on freebsd6.2, ghc6.6.1, ncurses-5.6_1)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



[Haskell-cafe] bizarre memory usage with data.binary

2007-10-02 Thread Anatoly Yakovenko
i am getting some weird memory usage out of this program:


module Main where

import Data.Binary
import Data.List(foldl')


main = do
   let sum' = foldl' (+) 0
   let list::[Int] = decode $ encode $ ([1..] :: [Int])
   print $ sum' list
   print "done"

it goes up to 500M and down to 17M on Windows.  It's built with ghc
6.6.1 with the latest Data.Binary.

Any ideas what could be causing the memory usage to jump around so much?


Thanks,
Anatoly
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Jonathan Cast
On Wed, 2007-10-03 at 00:01 +0200, Twan van Laarhoven wrote:
 Lots of people wrote:
   I want a UTF-8 bikeshed!
   No, I want a UTF-16 bikeshed!
 
 What the heck does it matter what encoding the library uses internally? 

+1

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bizarre memory usage with data.binary

2007-10-02 Thread Roel van Dijk
Does it terminate?

Looks like you are summing all the natural numbers. On a Turing
machine it should run forever; on a real computer it should run out
of memory. Unless I am missing something obvious :-)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bizarre memory usage with data.binary

2007-10-02 Thread Stefan O'Rear
On Wed, Oct 03, 2007 at 01:22:25AM +0200, Roel van Dijk wrote:
 Does it terminate?
 
 Looks like you are summing all the natural numbers. On a turing
 machine it should run forever,   on a real computer it should run out
 of memory. Unless I am missing something obvious :-)

There are only about 4 billion distinct values of type Int, 2 billion of
which are positive.  Integer is required for bigger values.
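
A quick GHCi illustration (the figures assume a 32-bit Int, as on the
32-bit GHC builds of the day; with a 64-bit Int the bound is larger but
the wrap-around behaviour is the same):

Prelude> maxBound :: Int
2147483647
Prelude> (maxBound :: Int) + 1
-2147483648
Prelude> 2^34 :: Int        -- silently wraps past the bound
0
Prelude> 2^34 :: Integer    -- Integer is unbounded
17179869184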

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bizarre memory usage with data.binary

2007-10-02 Thread Stefan O'Rear
On Tue, Oct 02, 2007 at 04:08:01PM -0700, Anatoly Yakovenko wrote:
 i am getting some weird memory usage out of this program:
 
 
 module Main where
 
 import Data.Binary
 import Data.List(foldl')
 
 
 main = do
let sum' = foldl' (+) 0
let list::[Int] = decode $ encode $ ([1..] :: [Int])
print $ sum' list
print done
 
 it goes up to 500M and down to 17M on windows.  Its build with ghc
 6.6.1 with the latest data.binary
 
 Any ideas what could be causing the memory usage to jump around so much?

Only 500M?  encode for lists is strict, I would have expected around
80GB usage...  What does -ddump-simpl-stats say?

Stefan


signature.asc
Description: Digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Anatoly Yakovenko
Program1:

module Main where

import Data.Binary
import Data.List(foldl')


main = do
  let sum' = foldl' (+) 0
  let list::[Int] = decode $ encode $ ([1..] :: [Int])
  print $ sum' list
  print "done"

vs

Program2:

module Main where

import Data.Binary
import Data.List(foldl')


main = do
  let sum' = foldl' (+) 0
  let list::[Int] = [1..]
  print $ sum' list
  print "done"

Neither program is expected to terminate.  The point of these examples
is to demonstrate that Data.Binary encode and decode have some strange
memory allocation patterns.

If you run Program1, it will run forever, but its memory usage on my
machine goes to 500M then back down to 17M then back up to 500M then
back down to 17M... repeatedly.  I don't think this has anything to do
with running out of space in a 32 bit integer.

Program2 on the other hand runs at constant memory at around 2M.

Anatoly

On 10/2/07, Anatoly Yakovenko [EMAIL PROTECTED] wrote:
 i am getting some weird memory usage out of this program:


 module Main where

 import Data.Binary
 import Data.List(foldl')


 main = do
let sum' = foldl' (+) 0
let list::[Int] = decode $ encode $ ([1..] :: [Int])
print $ sum' list
print done

 it goes up to 500M and down to 17M on windows.  Its build with ghc
 6.6.1 with the latest data.binary

 Any ideas what could be causing the memory usage to jump around so much?


 Thanks,
 Anatoly

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Don Stewart
aeyakovenko:
 Program1:
 
 module Main where
 
 import Data.Binary
 import Data.List(foldl')
 
 
 main = do
   let sum' = foldl' (+) 0
   let list::[Int] = decode $ encode $ ([1..] :: [Int])
   print $ sum' list
   print done
 
 vs
 
 Program2:
 
 module Main where
 
 import Data.Binary
 import Data.List(foldl')
 
 
 main = do
   let sum' = foldl' (+) 0
   let list::[Int] = [1..]
   print $ sum' list
   print done
 
 neither program is expected to terminate.  The point of these examples
 is to demonstrate that Data.Binary encode and decode have some strange
 memory allocation patters.

The encode instance for lists is fairly strict:

instance Binary a => Binary [a] where
    put l = put (length l) >> mapM_ put l
    get   = do n <- get :: Get Int
               replicateM n get

This is ok, since typically you aren't serialising infinite structures.

Use a newtype, and a lazier instance, if you need to do this.
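
A minimal sketch of that newtype approach (illustrative only, not part of
the binary package; the Stream name is invented here, and it is
essentially the same tag-byte-per-element idea as Anatoly's Foo example
and Felipe's chunked List instance later in this thread):

import Data.Binary
import Data.Word (Word8)

newtype Stream a = Stream [a]

-- One tag byte per cons cell instead of a length prefix, so put can
-- emit output before the end of the list is known.  How lazily get
-- rebuilds the list depends on the binary version in use.
instance Binary a => Binary (Stream a) where
  put (Stream [])     = put (0 :: Word8)
  put (Stream (x:xs)) = put (1 :: Word8) >> put x >> put (Stream xs)
  get = do t <- get
           case t :: Word8 of
             0 -> return (Stream [])
             _ -> do x <- get
                     Stream xs <- get
                     return (Stream (x:xs))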

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Dan Weston
Maybe what you are observing is that the operational semantics of 
undefined is undefined. The program can halt, run forever, use no 
memory, use all the memory.


Although I doubt what GHC does with this code is a random process, I 
don't think it's too meaningful to ask what are the space usage patterns 
of a program returning bottom.


Anatoly Yakovenko wrote:

Program1:

module Main where

import Data.Binary
import Data.List(foldl')


main = do
  let sum' = foldl' (+) 0
  let list::[Int] = decode $ encode $ ([1..] :: [Int])
  print $ sum' list
  print done

vs

Program2:

module Main where

import Data.Binary
import Data.List(foldl')


main = do
  let sum' = foldl' (+) 0
  let list::[Int] = [1..]
  print $ sum' list
  print done

neither program is expected to terminate.  The point of these examples
is to demonstrate that Data.Binary encode and decode have some strange
memory allocation patters.

If you run Program1, it will run forever, but its memory usage on my
machine goes to 500M then back down to 17M then back up to 500M then
back down to 17M... repeatedly.  I don't think this has anything to do
with running out of space in a 32 bit integer.

Program2 on the other hand runs at constant memory at around 2M.

Anatoly

On 10/2/07, Anatoly Yakovenko [EMAIL PROTECTED] wrote:

i am getting some weird memory usage out of this program:


module Main where

import Data.Binary
import Data.List(foldl')


main = do
   let sum' = foldl' (+) 0
   let list::[Int] = decode $ encode $ ([1..] :: [Int])
   print $ sum' list
   print done

it goes up to 500M and down to 17M on windows.  Its build with ghc
6.6.1 with the latest data.binary

Any ideas what could be causing the memory usage to jump around so much?


Thanks,
Anatoly


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Anatoly Yakovenko
On 10/2/07, Don Stewart [EMAIL PROTECTED] wrote:
 aeyakovenko:
  Program1:
 
  module Main where
 
  import Data.Binary
  import Data.List(foldl')
 
 
  main = do
let sum' = foldl' (+) 0
let list::[Int] = decode $ encode $ ([1..] :: [Int])
print $ sum' list
print done

 The encode instance for lists is fairly strict:

  instance Binary a => Binary [a] where
      put l = put (length l) >> mapM_ put l
      get   = do n <- get :: Get Int
                 replicateM n get

 This is ok, since typically you aren't serialising infinite structures.

Hmm, this doesn't make sense to me. It goes up to 500M, then down, then
back up, then back down, so it doesn't just run out of memory because
(length l) forces you to evaluate the entire list.

 Use a newtype, and a lazier instance, if you need to do this.

Thanks for the tip.  This runs at a constant 4M:

module Main where

import Data.Binary
import Data.List(foldl')

data Foo = Foo Int Foo | Null

instance Binary Foo where
   put (Foo i f) = do put (0 :: Word8)
                      put i
                      put f
   put (Null)    = do put (1 :: Word8)
   get = do t <- get :: Get Word8
            case t of
              0 -> do i <- get
                      f <- get
                      return (Foo i f)
              1 -> do return Null

sumFoo zz (Null) = zz
sumFoo zz (Foo ii ff) = sumFoo (zz + ii) ff

fooBar i = Foo i (fooBar (i + 1))

main = do
   print $ sumFoo 0 $ decode $ encode $ fooBar 1
   print "done"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Anatoly Yakovenko
Servers never terminate. Pretend that I have a server that reads a
list encoded with Data.Binary from a socket, sums it up, and
returns the current sum.  I would expect it to run in constant memory,
never terminate, and do useful work.

That is basically the problem I am facing right now.  My program
seems to grow randomly in memory use when marshalling large data types
encoded using Data.Binary.

On 10/2/07, Dan Weston [EMAIL PROTECTED] wrote:
 Maybe what you are observing is that the operational semantics of
 undefined is undefined. The program can halt, run forever, use no
 memory, use all the memory.

 Although I doubt what GHC does with this code is a random process, I
 don't think it's too meaningful to ask what are the space usage patterns
 of a program returning bottom.

 Anatoly Yakovenko wrote:
  Program1:
 
  module Main where
 
  import Data.Binary
  import Data.List(foldl')
 
 
  main = do
let sum' = foldl' (+) 0
let list::[Int] = decode $ encode $ ([1..] :: [Int])
print $ sum' list
print done
 
  vs
 
  Program2:
 
  module Main where
 
  import Data.Binary
  import Data.List(foldl')
 
 
  main = do
let sum' = foldl' (+) 0
let list::[Int] = [1..]
print $ sum' list
print done
 
  neither program is expected to terminate.  The point of these examples
  is to demonstrate that Data.Binary encode and decode have some strange
  memory allocation patters.
 
  If you run Program1, it will run forever, but its memory usage on my
  machine goes to 500M then back down to 17M then back up to 500M then
  back down to 17M... repeatedly.  I don't think this has anything to do
  with running out of space in a 32 bit integer.
 
  Program2 on the other hand runs at constant memory at around 2M.
 
  Anatoly
 
  On 10/2/07, Anatoly Yakovenko [EMAIL PROTECTED] wrote:
  i am getting some weird memory usage out of this program:
 
 
  module Main where
 
  import Data.Binary
  import Data.List(foldl')
 
 
  main = do
 let sum' = foldl' (+) 0
 let list::[Int] = decode $ encode $ ([1..] :: [Int])
 print $ sum' list
 print done
 
  it goes up to 500M and down to 17M on windows.  Its build with ghc
  6.6.1 with the latest data.binary
 
  Any ideas what could be causing the memory usage to jump around so much?
 
 
  Thanks,
  Anatoly
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Felipe Almeida Lessa
On 10/2/07, Don Stewart [EMAIL PROTECTED] wrote:
 The encode instance for lists is fairly strict:

  instance Binary a => Binary [a] where
      put l = put (length l) >> mapM_ put l
      get   = do n <- get :: Get Int
                 replicateM n get

 This is ok, since typically you aren't serialising infinite structures.

 Use a newtype, and a lazier instance, if you need to do this.

Maybe something like this (WARNING: ugly code, as in not elegant, follows):

newtype List a = List [a]

split255 :: [a] -> (Word8, [a], [a])
split255 = s 0 where
    s 255 xs   = (255, [], xs)
    s n (x:xs) = let (n', ys, zs) = s (n+1) xs in (n', x:ys, zs)
    s n []     = (n, [], [])

instance Binary a => Binary (List a) where
  put (List l) = let (n, xs, ys) = split255 l
                 in do putWord8 n
                       mapM_ put xs
                       when (n == 255) (put $ List ys)
  get = do n  <- getWord8
           xs <- replicateM (fromEnum n) get
           if n == 255
             then get >>= \(List ys) -> return (List $ xs ++ ys)
             else return (List xs)


It uses chunks of 255 elements and so doesn't traverse the whole list
before starting to output something. OTOH, there's some data overhead
(1 byte every 255 elements). It seems to run your example fine and in
constant memory.
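
As a usage sketch (assuming the List newtype and instance above are in
scope, together with the usual Data.Binary and Data.List imports),
Anatoly's test would wrap before encoding and unwrap after decoding;
like the original it never terminates, but the traversal stays
incremental:

main :: IO ()
main = do
  let List list = decode (encode (List ([1..] :: [Int]))) :: List Int
  print $ foldl' (+) 0 list
  print "done"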

HTH,

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Don Stewart
aeyakovenko:
 servers never terminate, pretend that i have a server that reads a
 list encoded with data.binary from a socket, and sums it up and
 returns the current sum.  i would expect it to run in constant memory,
 never terminate, and do useful work.
 
 which is basically the problem that I am facing right now.  my program
 seems to grow randomly in memory use when marshaling large data types
 encoded using data.binary.

If it's specifically the list instance, where we currently trade laziness
for efficiency of encoding (which may or may not be the right thing),
I'd suggest a fully lazy encoding instance?

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Assignment, Substitution or what?

2007-10-02 Thread PR Stanley



  f x = x + x
  Is the x used to create a pattern in the definition, and when f is
  called it's replaced by a value?

Those equation-like definitions are syntactic sugar for lambda
abstractions. f could as well be defined as f = \x -> x + x.

Please elaborate


First, the

f x =

part says that f is a function which takes a single parameter, 
called x.  The other side of the = sign gives the function body: in 
this case, x + x.  This is exactly the same thing that is expressed 
by the lambda expression


\x -> x + x

This expression defines a function that takes a single parameter 
called x, and returns the value of x + x.  The only difference is 
that with the lambda expression, this function is not given a 
name.  But you can easily give the function a name (just as you can 
give any Haskell expression a name) by writing


f = \x -> x + x

In general, writing

g x y z = blah blah

is just a shorthand for

g = \x -> \y -> \z -> blah blah.

That is, it simultaneously creates a function expression, and 
assigns it a name.
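
A concrete pair of definitions (names invented for illustration) that GHC
treats the same way, including under partial application:

add3 :: Int -> Int -> Int -> Int
add3 x y z = x + y + z

add3' :: Int -> Int -> Int -> Int
add3' = \x -> \y -> \z -> x + y + z

-- add3 1 2 3 == add3' 1 2 3 == 6, and both partially apply:
-- map (add3 1 2) [10, 20] == [13, 23]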


Does that help?


Yes and thanks for the reply.
When a function is declared in C the argument variable has an address 
somewhere in the memory:

int f ( int x ) {
return x * x;
}

any value passed to f() is assigned to x. x is the identifier for a 
real slot in the memory (the stack most likely) made available for f().

Is this also what happens in Haskell?
Thanks, Paul

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: bizarre memory usage with data.binary

2007-10-02 Thread Anatoly Yakovenko
 If its specifically the list instance, where we currently trade laziness
 for efficiency of encoding (which may or may not be the right thing),
 I'd suggest a fully lazy encoding instance?

It's not really a list, it's more of a tree that has shared nodes, so
something like this:

      A
     / \
    B   C
     \ /
      D
     / \
    E   F

I suspect that maybe after encode/decode I end up with something like

      A
     / \
    B   C
    |   |
    D   D
   / \ / \
  E   F E   F
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Assignment, Substitution or what?

2007-10-02 Thread Derek Elkins
On Wed, 2007-10-03 at 01:42 +0100, PR Stanley wrote:
f x = x + x
Is the x use to create a pattern in the definition and when f is
called it's replaced by a value?
  
  Those equation-like definitions are syntactic sugar for lambda
  abstractions. f could as well be defined as f = \x -> x + x.
 
 Please elaborate
 
 
 First, the
 
 f x =
 
 part says that f is a function which takes a single parameter, 
 called x.  The other side of the = sign gives the function body: in 
 this case, x + x.  This is exactly the same thing that is expressed 
 by the lambda expression
 
 \x -> x + x
 
 This expression defines a function that takes a single parameter 
 called x, and returns the value of x + x.  The only difference is 
 that with the lambda expression, this function is not given a 
 name.  But you can easily give the function a name (just as you can 
 give any Haskell expression a name) by writing
 
 f = \x -> x + x
 
 In general, writing
 
 g x y z = blah blah
 
 is just a shorthand for
 
 g = \x -> \y -> \z -> blah blah.
 
 That is, it simultaneously creates a function expression, and 
 assigns it a name.
 
 Does that help?
 
 Yes and thanks for the reply.
 When a function is declared in C the argument variable has an address 
 somewhere in the memory:
 int f ( int x ) {
 return x * x;
 }
 
 any value passed to f() is assigned to x. x is the identifier for a 
 real slot in the memory (the stack most likely) made available for f().
 Is this also what happens in Haskell?

How would you tell?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Isaac Dupree

Stefan O'Rear wrote:

On Tue, Oct 02, 2007 at 11:05:38PM +0200, Johan Tibell wrote:

I do not believe that anyone was seriously advocating multiple blessed
encodings.  The main question is *which* encoding to bless.  99+% of
text I encounter is in US-ASCII, so I would favor UTF-8.  Why is UTF-16
better for me?

All software I write professional have to support 40 languages
(including CJK ones) so I would prefer UTF-16 in case I could use
Haskell at work some day in the future. I dunno that who uses what
encoding the most is good grounds to pick encoding though. Ease of
implementation and speed on some representative sample set of text may
be.


UTF-8 supports CJK languages too.  The only question is efficiency


Due to the additional complexity of handling UTF-8 -- EVEN IF the actual 
text processed happens all to be US-ASCII -- will UTF-8 perhaps be less 
efficient than UTF-16, or only as fast?


Isaac
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: PROPOSAL: New efficient Unicode string library.

2007-10-02 Thread Brandon S. Allbery KF8NH


On Oct 2, 2007, at 21:12 , Isaac Dupree wrote:


Stefan O'Rear wrote:

On Tue, Oct 02, 2007 at 11:05:38PM +0200, Johan Tibell wrote:
I do not believe that anyone was seriously advocating multiple  
blessed
encodings.  The main question is *which* encoding to bless.  99+ 
% of
text I encounter is in US-ASCII, so I would favor UTF-8.  Why is  
UTF-16

better for me?

All software I write professional have to support 40 languages
(including CJK ones) so I would prefer UTF-16 in case I could use
Haskell at work some day in the future. I dunno that who uses what
encoding the most is good grounds to pick encoding though. Ease of
implementation and speed on some representative sample set of  
text may

be.

UTF-8 supports CJK languages too.  The only question is efficiency


Due to the additional complexity of handling UTF-8 -- EVEN IF the  
actual text processed happens all to be US-ASCII -- will UTF-8  
perhaps be less efficient than UTF-16, or only as fast?


UTF8 will be very slightly faster in the all-ASCII case, but quickly
blows chunks if you have *any* characters that require a multibyte
encoding.  Given the way UTF8 encoding works, this includes even Latin-1
non-ASCII, never mind CJK.  (I think people have been missing that
point.  UTF8 is only cheap for 00-7f, *nothing else*.)
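
A small illustration of those size classes (not from the thread; bytes
per code point, ignoring surrogate-pair details on the UTF-16 side):

import Data.Char (ord)

utf8Bytes, utf16Bytes :: Char -> Int
utf8Bytes c
  | n <= 0x7F   = 1   -- ASCII: the only one-byte case in UTF-8
  | n <= 0x7FF  = 2   -- all Latin-1 non-ASCII, Cyrillic, Greek, ...
  | n <= 0xFFFF = 3   -- most CJK
  | otherwise   = 4
  where n = ord c
utf16Bytes c = if ord c <= 0xFFFF then 2 else 4

main :: IO ()
main = mapM_ (\c -> print (c, utf8Bytes c, utf16Bytes c))
             (map toEnum [0x61, 0xE9, 0x42F, 0x8A9E] :: String)
             -- 'a', e-acute, Cyrillic Ya, and a CJK ideograph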


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Assignment, Substitution or what?

2007-10-02 Thread ok

On 3 Oct 2007, at 1:42 pm, PR Stanley wrote:
When a function is declared in C the argument variable has an  
address somewhere in the memory:

int f ( int x ) {
return x * x;
}


Wrong.  On the machines I use, x will be passed in registers and will
never ever have an address in memory.  In fact, unless I hold the Sun
C compiler back by main force, it will probably open code f() so that
there isn't any f any more, let alone any x.  Here's the actual SPARC
code generated for f:

f:      retl
        mulx    %o0,%o0,%o0

No memory, no address.

In C, a function call is rather like entering a new block, declaring
some new variables (which might or might not ever go into memory) and
initialising them to the values of the arguments.

In Haskell, carrying out a function call is rather like creating some
new names and *binding* them to promises to evaluate the arguments
(if they are ever needed) and then returning a promise to evaluate the
body (if it is ever needed).  The key point is *binding* names to
(deferred calculations of) values, rather than *initialising*
mutable variables to (fully evaluated) values.  How this is done is
entirely up to the compiler.
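
A small illustration of that binding-to-a-promise point (names invented
here): a Haskell function that never uses its parameter never evaluates
it, whereas the C version above would have required a fully evaluated int
before the call.

alwaysFortyTwo :: Int -> Int
alwaysFortyTwo _ = 42

main :: IO ()
main = print (alwaysFortyTwo undefined)  -- prints 42; the argument thunk is never forced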

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] build problems with hscurses

2007-10-02 Thread brad clawsie
i solved this myself - for the sake of documenting the solution, i
obtained the darcs version with 

darcs get --set-scripts-executable
http://www.stefanwehr.de/darcs/hscurses/

and then a standard hackage install

and i think the --set-scripts-executable in this case is significant

On Tue, Oct 02, 2007 at 03:55:00PM -0700, brad clawsie wrote:
 hoping someone here can help me with a build problem
 
 when going through the hackage build stage, i get numerous errors
 like:
 
 HSCurses/Curses.hs::0:  invalid preprocessing directive #def
 
 where  ranges in lines from 1597 to 1621. 
 
 is there a special directive i need for runhaskell? any info
 appreciated. 
 
 (on freebsd6.2, ghc6.6.1, ncurses-5.6_1)
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe