[Haskell-cafe] Re: Building network package on Windows

2009-06-08 Thread Johan Tibell
On Mon, Jun 8, 2009 at 2:04 AM, Iavor Diatchki iavor.diatc...@gmail.com wrote:

 Hello,
 Here is an update, in case anyone else runs into the same problem.

 My understanding, is that the problem was caused by a mistake in the
 configure script for the network package, which after (correctly)
 detecting that IPv6 functionality was not available on my platform, it
 (incorrectly) tried to gain this functionality by redefining the
 version of my platform.  Concretely, apparently I have Windows Vista
 Basic Home Edition, which seems to identify itself as version 0x400,
 while the missing functions are only available on versions of windows
 >= 0x501.

 My workaround was to:
  1. checkout the network package from the repository on code.haskell.com
  2. modify configure.ac to comment out the section where it sets the
 windows version to 0x501
  3. autoreconf
  4. build using the usual cabal way


Do you mind filing a bug at http://trac.haskell.org/network so we can track
this problem and hopefully fix it in the future (I'm afraid someone with
better Windows knowledge will have to do it)?

Thanks,

Johan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Niemeijer, R.A.
Hello everyone,

In the last month or so, I've found myself using the following snippet a lot:

import Control.Parallel.Strategies
import Test.BenchPress

bench 1 . print . rnf

This snippet fully evaluates a value and prints how long it took to do so. I 
regularly use it to see where the bottlenecks lie in my algorithms.  It has the 
minor annoyance, however, that it prints a lot of information (min, max, mean, 
median, percentiles) that is all identical, because I only run it once. The 
reason I only run it once is that I'm typically evaluating a pure value, which 
means that any subsequent attempts to benchmark the evaluation time will take 
no time at all, since it has already been evaluated.

To solve this, I decided to write a small library to make this process easier 
and only print the time taken once. The result is StrictBench, which can be 
found at http://hackage.haskell.org/cgi-bin/hackage-scripts/package/StrictBench.

A short example:

import Test.StrictBench

main = bench [1..1000 :: Integer]

This code would give

2890.625 ms

as output. For the rest of the documentation I refer you to the Hackage page.
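
For the curious, the core idea is roughly the following (a sketch only, not
the actual StrictBench source; it assumes the NFData/rnf machinery from
Control.Parallel.Strategies and the time package):

import Control.Exception (evaluate)
import Control.Parallel.Strategies (NFData, rnf)
import Data.Time.Clock (diffUTCTime, getCurrentTime)

-- Force a value to normal form exactly once and print the wall-clock time.
benchSketch :: NFData a => a -> IO ()
benchSketch x = do
  start <- getCurrentTime
  evaluate (rnf x)
  end <- getCurrentTime
  print (diffUTCTime end start)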

Regards,
Remco Niemeijer
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Building network package on Windows

2009-06-08 Thread Thomas ten Cate
On Mon, Jun 8, 2009 at 02:04, Iavor Diatchki iavor.diatc...@gmail.com wrote:
 Hello,
 Here is an update, in case anyone else runs into the same problem.

 My understanding, is that the problem was caused by a mistake in the
 configure script for the network package, which after (correctly)
 detecting that IPv6 functionality was not available on my platform, it
 (incorrectly) tried to gain this functionality by redefining the
 version of my platform.  Concretely, apparently I have Windows Vista
 Basic Home Edition, which seems to identify itself as version 0x400,
 while the missing functions are only available on versions of windows
>= 0x501.

0x400 is, if I'm not mistaken, Windows 95. Vista is 0x600 [1]. I don't
think they *identify* themselves as such; rather, the program itself
specifies what Windows versions it wants to be able to run on.

In particular, the macros _WIN32_WINNT and WINVER should be defined as
the *minimum* platform version on which the compiled binary is to
work. Therefore, if functionality from XP (0x501) is needed, it is
perfectly okay to redefine these macros to 0x501. This will flip some
switches in included header files that enable declarations for the
desired functionality. Of course, the binary will then only run on
platforms that actually have this functionality.

Hope that clears things up a bit.

Thomas

[1] http://msdn.microsoft.com/en-us/library/aa383745.aspx
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haddock : parse error on input `{-# UNPACK'

2009-06-08 Thread David Waern
2009/6/7 Dominic Steinitz domi...@steinitz.org:
 Ha! It's yet another of haddock's quirks. If I replace -- ^ by -- then haddock
 accepts {-#. I'll update the ticket you created.

 -- | The parse state
 data S = S {-# UNPACK #-} !BL.ByteString  -- ^ input
           {-# UNPACK #-} !Int  -- ^ bytes read
           {-# UNPACK #-} ![B.ByteString]
           {-# UNPACK #-} !Int  -- ^ the failure depth

 -- | The parse state
 data S = S {-# UNPACK #-} !BL.ByteString  -- input
           {-# UNPACK #-} !Int  -- bytes read
           {-# UNPACK #-} ![B.ByteString]
           {-# UNPACK #-} !Int  -- the failure depth

Haddock doesn't actually support comments on individual constructor
arguments. People often get confused by this, and the error message
isn't especially useful. We have a ticket for giving better error
messages:

  http://trac.haskell.org/haddock/ticket/94

We also have a ticket about the feature itself:

  http://trac.haskell.org/haddock/ticket/95

David
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Niemeijer, R.A.
Hello everyone.

Last night I uploaded my first Hackage library with documentation 
(StrictBench). I learned that it takes somewhere between 2 and 8 hours for the 
link to the documentation to become active. This is confusing for first-time 
package authors (I went to #haskell to ask what I had forgotten) and annoying 
for everyone (Don Stewart submitted it to reddit shortly after I published it, 
which kind of wastes the time of everyone who checks it out before the 
documentation is up). Moreover, there seems to be no reason for it; on your own 
computer haddock takes just (milli)seconds.

Hence I wanted to ask if this is a bug or if there is a good technical or 
social reason for it, and whether there is any way around it.

Regards,
Remco Niemeijer
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Brandon S. Allbery KF8NH

On Jun 8, 2009, at 04:10 , Niemeijer, R.A. wrote:
Hence I wanted to ask if this is a bug or if there is a good  
technical or social reason for it, and whether there is any way  
around it.



Auto-running haddock on upload strikes me as a good way to open  
hackage.haskell.org to a denial of service attack.


Additionally, I *think* haddock is run as part of the automated build  
tests, which (again) happen on a regular schedule instead of being  
triggered by uploads to avoid potential denial of service attacks.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Brandon S. Allbery KF8NH

On Jun 8, 2009, at 04:36 , Brandon S. Allbery KF8NH wrote:

On Jun 8, 2009, at 04:10 , Niemeijer, R.A. wrote:
Hence I wanted to ask if this is a bug or if there is a good  
technical or social reason for it, and whether there is any way  
around it.


Auto-running haddock on upload strikes me as a good way to open  
hackage.haskell.org to a denial of service attack.



I should clarify:  yes, in a valid project haddock takes almost no  
time.  Nevertheless:


(1) if many uploads of even valid packages are made in a very short  
time, the system load could well be severely impacted;
(2) what of malicious packages, which might trigger bugs in haddock  
leading to (say) 100% CPU loops?  That we don't know of any doesn't  
mean there aren't any, unless the test suite is absolutely 100%  
complete (and for a large program, that becomes as hard to verify as  
the program itself.  now consider that haddock is part of ghc these  
days...).


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Niemeijer, R.A.
If that is the main concern, would the following not work?

- Hackage accounts already have to be created manually, so there is no
  chance of a DDoS.
- Uploading to hackage requires a username and password, which means
  the user can be identified. Set a timeout on uploads for each user:
  packages sent within 2 minutes of the previous one are automatically
  refused. Prevents quantity-based DoS.
- Generate haddock docs immediately on upload, but apply a 2-second
  timeout; if it takes longer, the process is killed and no documentation
  is generated. Prevents exploit-based DoS.
- If many valid packages are uploaded in a short time (though I have my
  doubts as to how often that is going to happen), put them in a queue.
  Documentation will take a bit longer to generate, but the server can
  control the load. Prevents inadvertent DoS.

Result: immediate documentation for every contributor with good
intentions (which, face it, is going to be all of them; I doubt Haskell
is popular enough yet to be the target of DoS attacks) and no
possibility for DoS attacks. I might be overlooking something, but
I believe this should work just fine.
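
Concretely, the 2-second timeout above could look something like this sketch
(the IO action for the documentation build is hypothetical):

import System.Timeout (timeout)

-- Run the given documentation build, giving up after two seconds.
-- Returns True if it finished in time, False if it was cut off.
buildDocsWithLimit :: IO () -> IO Bool
buildDocsWithLimit runHaddock = do
  result <- timeout (2 * 1000000) runHaddock  -- timeout is in microseconds
  return (result /= Nothing)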

-Original Message-
From: Brandon S. Allbery KF8NH [mailto:allb...@ece.cmu.edu] 
Sent: Monday, 8 June 2009 10:41
To: Brandon S. Allbery KF8NH
Cc: Niemeijer, R.A.; haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Slow documentation generation on Hackage

On Jun 8, 2009, at 04:36 , Brandon S. Allbery KF8NH wrote:
 On Jun 8, 2009, at 04:10 , Niemeijer, R.A. wrote:
 Hence I wanted to ask if this is a bug or if there is a good 
 technical or social reason for it, and whether there is any way 
 around it.

 Auto-running haddock on upload strikes me as a good way to open 
 hackage.haskell.org to a denial of service attack.


I should clarify:  yes, in a valid project haddock takes almost no  
time.  Nevertheless:

(1) if many uploads of even valid packages are made in a very short  
time, the system load could well be severely impacted;
(2) what of malicious packages, which might trigger bugs in haddock  
leading to (say) 100% CPU loops?  That we don't know of any doesn't  
mean there aren't any, unless the test suite is absolutely 100%  
complete (and for a large program, that becomes as hard to verify as  
the program itself.  now consider that haddock is part of ghc these  
days...).

-- 
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Cabal addressibility problem

2009-06-08 Thread Malcolm Wallace
Vasili I. Galchin vigalc...@gmail.com wrote:

 Executable GraphPartitionTest
   Main-Is:        Swish.HaskellRDF.GraphPartitionTest.hs
Other-modules:  Swish.HaskellRDF.GraphPartition
Swish.HaskellRDF.GraphClass
Swish.HaskellUtils.ListHelpers
Swish.HaskellUtils.TestHelpers

The Main-Is: line is wrong: it should be:

Main-Is:        Swish/HaskellRDF/GraphPartitionTest.hs

That is, it should name a filepath (using slashes), not a module (using
dots).

Regards,
Malcolm
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Ketil Malde
Niemeijer, R.A. r.a.niemei...@tue.nl writes:

 If that is the main concern, would the following not work?
  
  [...]

 Result: immediate documentation for every contributor with good
 intentions

Or simply, on upload, generate the doc directory with a temporary page
saying that documentation will arrive when it's good and ready?

Result: as before, but with less confusion for new contributors.  So
not as good as Niemeijer's suggestions, but probably easier to
implement. 

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hsmagick crash

2009-06-08 Thread Ron de Bruijn

Hi,

I am trying to extract the image data from various file formats and it appeared 
that hsmagick would be the right package to use.


However, it doesn't actually work or I use it incorrectly. If you have installed 
hsmagick and change the value of some_png_file to some existing png file, you 
should see that it crashes at some random pixel. For the particular 256*256 
image I had, it crashed on pixel_nr `elem` [54,56,57].


I am open to suggestions for better ways to get an Array (Int,Int) RGB from e.g. 
a png file.


import Graphics.Transform.Magick.Images
import Graphics.Transform.Magick.Types
import Foreign.Storable
import Control.Monad

image_file_name_to_2d_array file = do
    himage <- readImage file
    let ptr_to_image = image himage
    himage_ <- peekElemOff ptr_to_image 0
    let bounds@(_rows, _cols) = (rows himage_, columns himage_)
        number_of_pixels = fromIntegral _rows * fromIntegral _cols
    mapM (\pixel_nr -> do
            putStrLn ("Pixel: " ++ show pixel_nr)
            pixel_packet <- liftM background_color_ $
                              peekElemOff ptr_to_image pixel_nr
            let red_component = red pixel_packet
            putStrLn ("Pixel packet: " ++ show red_component)
            return red_component)
         [0 .. number_of_pixels - 1]

some_png_file = "foo.png"

t = do
  initialize_image_library
  image_file_name_to_2d_array some_png_file

initialize_image_library = initializeMagick

Best regards,
 Ron de Bruijn
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hsmagick crash

2009-06-08 Thread Mark Wassell
Have you tried 
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pngload ?


Mark


Ron de Bruijn wrote:

Hi,

I am trying to extract the image data from various file formats and it 
appeared that hsmagick would be the right package to use.


However, it doesn't actually work or I use it incorrectly. If you have 
installed hsmagick and change the value of some_png_file to some 
existing png file, you should see that it crashes at some random 
pixel. For the particular 256*256 image I had, it crashed on pixel_nr 
`elem` [54,56,57].


I am open to suggestions for better ways to get a Array (Int,Int) RGB 
from e.g. a png file.


import Graphics.Transform.Magick.Images
import Graphics.Transform.Magick.Types
import Foreign.Storable
import Control.Monad

image_file_name_to_2d_array file = do
    himage <- readImage file
    let ptr_to_image = image himage
    himage_ <- peekElemOff ptr_to_image 0
    let bounds@(_rows, _cols) = (rows himage_, columns himage_)
        number_of_pixels = fromIntegral _rows * fromIntegral _cols
    mapM (\pixel_nr -> do
            putStrLn ("Pixel: " ++ show pixel_nr)
            pixel_packet <- liftM background_color_ $
                              peekElemOff ptr_to_image pixel_nr
            let red_component = red pixel_packet
            putStrLn ("Pixel packet: " ++ show red_component)
            return red_component)
         [0 .. number_of_pixels - 1]

some_png_file = "foo.png"

t = do
  initialize_image_library
  image_file_name_to_2d_array some_png_file

initialize_image_library = initializeMagick

Best regards,
 Ron de Bruijn
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Magnus Therning
On Mon, Jun 8, 2009 at 8:20 AM, Niemeijer, R.A. r.a.niemei...@tue.nl wrote:
 Hello everyone,

 In the last month or so, I've found myself using the following snippet a lot:

 import Control.Parallel.Strategies
 import Test.BenchPress

 bench 1 . print . rnf

 This snippet fully evaluates a value and prints how long it took to do so. I 
 regularly use it to see where the bottlenecks lie in my algorithms.  It has 
 the minor annoyance, however, that it prints a lot of information (min, max, 
 mean, median, percentiles) that is all identical, because I only run it once. 
 The reason I only run it once is that I'm typically evaluating a pure value, 
 which means that any subsequent attempts to benchmark the evaluation time 
 will take no time at all, since it has already been evaluated.

 To solve this, I decided to write a small library to make this process easier 
 and only print the time taken once. The result is StrictBench, which can be 
 found at 
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/StrictBench.

Is there no way to force repeated evaluation of a pure value?  (It'd
be nice to be able to perform time measurements on pure code so that
it's possible to compare Haskell implementations of algorithms to
implementations in other languages, without running into confounding
factors.)

/M

-- 
Magnus Therning (OpenPGP: 0xAB4DFBA4)
magnus@therning.org  Jabber: magnus@therning.org
http://therning.org/magnus identi.ca|twitter: magthe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monad transformer responsibilities

2009-06-08 Thread Stephan Friedrichs
Henning Thielemann wrote:
 [...]
 
  - So you have to declare them near the test cases and they're orphan
instances

 The entire project doesn't issue a single warning when compiling with
 -Wall *except* two orphan instances when building the test cases...
 
 However, I had sometimes the case, where a type from another library was
 part of my tests and thus I needed its Arbitrary instance. I could have
 defined instances for the foreign types, but they would have been orphan
 and I risk that the library author decides to add the instances later.
 

Hmm... maybe it is a good idea to activate the instance declaration with
a cabal flag? I've already got:

Flag Test
  Description:   Build a binary running test cases
  Default:   False

and I could easily add something like

if flag( Test )
  CPP-Options:   -D__TEST__
  Build-Depends: QuickCheck >= 2 && < 3

and

data MyType = ...

#ifdef __TEST__
instance Arbitrary MyType where
...
#endif

A usage of cabal flags that strongly reminds me of Gentoo's useflags :)
However, this will result in a total mess with more than one such flag...

//Stephan


-- 

It used to be said: I think, therefore I am.
Today we know: it works without that, too.

 - Dieter Nuhr
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hsmagick crash

2009-06-08 Thread Ron de Bruijn

Mark Wassell wrote:
 Have you tried
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pngload ?
Hi Mark,

I just did:

import Codec.Image.PNG

png_file_to_2d_array file = do
  either_error_string_or_png <- loadPNGFile file
  either
    (\s -> error $ "(png_file_to_2d_array) " ++ s)
    (\png ->
      putStrLn (show (dimensions png))
      )
    either_error_string_or_png

and then calling it gives:

*** Exception: (png_file_to_2d_array) failed to parse chunk IHDR, (line 1, 
column 1):

unexpected 0x0
expecting valid colorType: supported Ct2,Ct6

Best regards, Ron
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Thomas ten Cate
On Mon, Jun 8, 2009 at 11:05, Niemeijer, R.A. r.a.niemei...@tue.nl wrote:
 which, face it, is going to be all of them; I doubt Haskell
 is popular enough yet to be the target of DoS attacks

Second that. I think this is a good case in which some security should
be traded in for usability. And even if a DoS attack occurs, it just
causes some downtime... not unlike the certain hours of downtime that
the documentation currently has.

If there's actually an exploitable leak in Haddock that allows the
server to be compromised (not just DoSed), then the current Haddock
generation is just as vulnerable.

But of course I'm not one of the maintainers of Hackage... it's up to them to decide.

Thomas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hsmagick crash

2009-06-08 Thread Thomas ten Cate
On Mon, Jun 8, 2009 at 13:11, Ron de Bruijn r...@gamr7.com wrote:
 Mark Wassell wrote:
 Have you tried
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/pngload ?
 Hi Mark,

 I just did:

 import Codec.Image.PNG

 png_file_to_2d_array file = do
  either_error_string_or_png <- loadPNGFile file
  either
    (\s -> error $ "(png_file_to_2d_array) " ++ s)
    (\png ->
      putStrLn (show (dimensions png))
      )
    either_error_string_or_png

 and then calling it gives:

 *** Exception: (png_file_to_2d_array) failed to parse chunk IHDR, (line 1,
 column 1):
 unexpected 0x0
 expecting valid colorType: supported Ct2,Ct6

Testing this code with the PNG file from [1] gives me

(png_file_to_2d_array) PNG_transparency_demonstration_2.png (line 1,
column 1):
unexpected 0x2d

I guess that proves that it's not just you, but not much else.

Thomas

[1] 
http://upload.wikimedia.org/wikipedia/commons/9/9a/PNG_transparency_demonstration_2.png
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Maintaining laziness: another example

2009-06-08 Thread Martijn van Steenbergen

Hello,

While grading a Haskell student's work I ran into an example of a 
program not being lazy enough. Since it's such a basic and nice example 
I thought I'd share it with you:


One part of the assignment was to define append :: [a] -> [a] -> [a], 
another to define cycle2 :: [a] -> [a]. This was her (the majority of 
the students in this course is female!) solution:



append :: [a] -> [a] -> [a]
append [] ys = ys
append xs [] = xs
append (x:xs) ys = x : (append xs ys)

cycle2 :: [a] -> [a]
cycle2 [] = error "empty list"
cycle2 xs = append xs (cycle2 xs)


This definition of append works fine on any non-bottom input (empty, 
finite, infinite), but due to the unnecessary second append case, cycle2 
never produces any result.
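
For comparison, dropping the unnecessary second case makes append lazy in
its second argument:

append :: [a] -> [a] -> [a]
append [] ys = ys
append (x:xs) ys = x : append xs ys

Now append never inspects its second argument before producing output, so
append xs (cycle2 xs) can emit the elements of xs before the recursive call
is examined, and cycle2 becomes productive.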


Martijn.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Maintaining laziness: another example

2009-06-08 Thread Eugene Kirpichov
Cool! Probably one should start teaching with 'case' instead of
pattern function definitions; that would put an accent on what is
forced and in what order. Only after the student understands the
laziness issues, introduce pattern signatures.
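
For instance, the append being discussed, written with an explicit case (a
sketch of that teaching style):

append :: [a] -> [a] -> [a]
append xs ys = case xs of
                 []      -> ys
                 (x:xs') -> x : append xs' ys

Here it is immediately visible that only the first argument is ever
scrutinised.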

2009/6/8 Martijn van Steenbergen mart...@van.steenbergen.nl:
 Hello,

 While grading a Haskell student's work I ran into an example of a program
 not being lazy enough. Since it's such a basic and nice example I thought
 I'd share it with you:

 One part of the assignment was to define append :: [a] -> [a] -> [a],
 another to define cycle2 :: [a] -> [a]. This was her (the majority of the
 students in this course is female!) solution:

 append :: [a] -> [a] -> [a]
 append [] ys = ys
 append xs [] = xs
 append (x:xs) ys = x : (append xs ys)

 cycle2 :: [a] -> [a]
 cycle2 [] = error "empty list"
 cycle2 xs = append xs (cycle2 xs)

 This definition of append works fine on any non-bottom input (empty, finite,
 infinite), but due to the unnecessary second append case, cycle2 never
 produces any result.

 Martijn.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
Eugene Kirpichov
Web IR developer, market.yandex.ru
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Martijn van Steenbergen

Magnus Therning wrote:

Is there no way to force repeated evaluation of a pure value?  (It'd
be nice to be able to perform time measurements on pure code so that
it's possible to compare Haskell implementations of algorithms to
implementations in other languages, without running into confounding
factors.)


I'm really curious about this too.

My guess is that the answer is no because doing so would (among other 
things) mean a thunk would have to be copied first before it is evaluated, to 
preserve the unevaluated version. And what guarantee is there that 
values further down the expression haven't been evaluated already? 
Efficient lazy evaluation is hard; inefficient lazy evaluation is even 
harder. ;-)


Martijn.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Maintaining laziness: another example

2009-06-08 Thread Jan Christiansen

Hi,

this is a very nice example.

On 08.06.2009, at 14:31, Eugene Kirpichov wrote:


Cool! Probably one should start teaching with 'case' instead of
pattern function definitions; that would put an accent on what is
forced and in what order.


I like this idea.

Only after the student understands the laziness issues, introduce  
pattern signatures.



But I think this will not solve the problem. Even for an experienced  
Haskell programmer it is not easy to check whether a function is too  
strict. I think it would be helpful if you define your functions using  
case expressions but in general you do not want to do this. For  
example consider the implementation of intersect from Data.List.

 intersect xs ys = [x | x <- xs, any (==x) ys]
This definition is too strict. The evaluation of intersect [] [1..]  
yields [] while the evaluation of intersect [1..] [] does not  
terminate. This function can be improved such that it yields the empty  
list in both cases. This function was probably not implemented by a  
Haskell novice ; )
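
One way to get that behaviour, as a sketch (keeping the list comprehension
and only adding the missing case for an empty second argument):

intersect' :: Eq a => [a] -> [a] -> [a]
intersect' _  [] = []
intersect' xs ys = [x | x <- xs, any (== x) ys]

Now both intersect' [] [1..] and intersect' [1..] [] yield [].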


Cheers, Jan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Maintaining laziness: another example

2009-06-08 Thread Martijn van Steenbergen

Jan Christiansen wrote:
This definition is too strict. The evaluation of intersect [] [1..] 
yields [] while the evaluation of intersect [1..] [] does not terminate. 
This function can be improved such that it yields the empty list in both 
cases. This function was probably not implemented by a Haskell novice ; )


Right, so unlike in append in this example the extra case for [] is 
actually desirable! Welcome to Haskell. ;-)


Martijn.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] ANNOUNCE: testrunner-0.9

2009-06-08 Thread Max Bolingbroke
Your feature list sounds like an almost exact duplicate of that for my
test-framework package, which has been available on Hackage for months
(although it's almost totally unadvertised!):

https://github.com/batterseapower/test-framework/tree
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/test-framework

There is also John's testpack:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/testpack

Seems like we need to advertise these better :-)

Cheers,
Max

2009/6/8 Reinier Lamers tux_roc...@reinier.de:
 Dear all,

 testrunner is a new framework to run unit tests. It has the following
 distinguishing features:

 * It can run unit tests in parallel.
 * It can run QuickCheck and HUnit tests as well as simple boolean expressions.
 * It comes with a ready-made main function for your unit test executable.
 * This main function recognizes command-line arguments to select tests by name
  and replay QuickCheck tests.

 testrunner was conceived as a part of the darcs project.

 The home page is http://projects.haskell.org/testrunner/.

 testrunner can be downloaded from
 http://projects.haskell.org/testrunner/releases/testrunner-0.9.tar.gz, or with
 darcs with a "darcs get http://code.haskell.org/testrunner/".

 Regards,
 Reinier Lamers (tux_rocker)


 ___
 Haskell mailing list
 hask...@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Magnus Therning
On Mon, Jun 8, 2009 at 1:56 PM, Martijn van
Steenbergen mart...@van.steenbergen.nl wrote:
 Magnus Therning wrote:

 Is there no way to force repeated evaluation of a pure value?  (It'd
 be nice to be able to perform time measurements on pure code so that
 it's possible to compare Haskell implementations of algorithms to
 implementations in other languages, without running into confounding
 factors.)

 I'm really curious about this too.

 My guess is that the answer is no because doing so would (among other
 things) mean a thunk have to be copied first before it is evaluated, to
 preserve the unevaluated version. And what guarantee is there that values
 further down the expression haven't been evaluated already? Efficient lazy
 evaluation is hard; inefficient lazy evalation is even harder. ;-)

Yes, I guessed as much.  I was hoping that there might be some way of
tricking GHC into being more inefficient though, something like a
rollback in evaluation state.

/M

-- 
Magnus Therning (OpenPGP: 0xAB4DFBA4)
magnus@therning.org  Jabber: magnus@therning.org
http://therning.org/magnus identi.ca|twitter: magthe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Kenneth Hoste

Hi all,

On Jun 8, 2009, at 15:12 , Magnus Therning wrote:


On Mon, Jun 8, 2009 at 1:56 PM, Martijn van
Steenbergen mart...@van.steenbergen.nl wrote:

Magnus Therning wrote:


Is there no way to force repeated evaluation of a pure value?  (It'd
be nice to be able to perform time measurements on pure code so that
it's possible to compare Haskell implementations of algorithms to
implementations in other languages, without running into confounding
factors.)


I'm really curious about this too.

My guess is that the answer is no because doing so would (among  
other
things) mean a thunk have to be copied first before it is  
evaluated, to
preserve the unevaluated version. And what guarantee is there that  
values
further down the expression haven't been evaluated already?  
Efficient lazy

evaluation is hard; inefficient lazy evalation is even harder. ;-)


Yes, I guessed as much.  I was hoping that there might be some way of
tricking GHC into being more inefficient though, something like a
rollback in evaluation state.


I've been playing around with MicroBench [1], and I believe there is a  
way

to trick GHC (at least the 6.10.2 version) into being inefficient.

Below is a snippet of the code I used to benchmark various  
implementations

of a function. The key is partial function application and the IO monad.
Don't ask me why it works, but I believe it does.

-- benchmark a given function by applying it n times to the given value
benchmark :: (a -> b) -> a -> Int -> IO ()
benchmark f x n = do
    r <- mapM (\y -> return $! f y) (replicate n x)
    performGC
    return ()

The performGC might not be 100% necessary, but I see it as a part of the
function evaluation (i.e. make the runtime clean up the mess the  
function made).

Of course this assumes performGC to be called before using benchmark.

Note: MicroBench was doing something similar, but was using mapM_  
instead,
which no longer seems to fool GHC into evaluating the function n  
times. mapM

does seem to work though.

K.

--

Kenneth Hoste
Paris research group - ELIS - Ghent University, Belgium
email: kenneth.ho...@elis.ugent.be
website: http://www.elis.ugent.be/~kehoste
blog: http://boegel.kejo.be

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Sebastian Fischer


On Jun 8, 2009, at 2:56 PM, Martijn van Steenbergen wrote:


Is there no way to force repeated evaluation of a pure value?

I'm really curious about this too.


could it be done by wrapping the computation in a function which is  
repeatedly called and compiling with -fno-full-laziness to prevent the  
compiler from floating the computation to the top-level?
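
Something along these lines, as an untested sketch ('sum [1 .. n]' is just a
stand-in for the pure computation being measured):

import Control.Exception (evaluate)
import Control.Monad (forM_)

-- compile with -fno-full-laziness so the 'sum [1 .. n]' below is not
-- floated out of the lambda and shared between iterations
repeatEval :: Int -> Integer -> IO ()
repeatEval iterations n =
  forM_ [1 .. iterations] $ \_ ->
    evaluate (sum [1 .. n])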



--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Maintaining laziness: another example

2009-06-08 Thread Raynor Vliegendhart
This might be slightly related. When I was assisting a Haskell lab
course, I encountered solutions like the following:

 removeRoot :: BSTree a -> BSTree a
 removeRoot (Node x Empty Empty) = Empty
 removeRoot (Node x left  Empty) = left
 removeRoot (Node x Empty right) = right
 removeRoot (Node x left  right) = undefined {- not needed for discussion -}

The first removeRoot case is unnecessary. Students new to Haskell (or
maybe new to recursion in general?) seem to consider more base cases
than needed.
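
For the record, dropping that first case changes nothing observable, since
Node x Empty Empty then falls through to the next equation and still yields
Empty:

 removeRoot :: BSTree a -> BSTree a
 removeRoot (Node x left  Empty) = left
 removeRoot (Node x Empty right) = right
 removeRoot (Node x left  right) = undefined {- not needed for discussion -}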


-Raynor


On 6/8/09, Martijn van Steenbergen mart...@van.steenbergen.nl wrote:
 Hello,

 While grading a Haskell student's work I ran into an example of a program
 not being lazy enough. Since it's such a basic and nice example I thought
 I'd share it with you:

  One part of the assignment was to define append :: [a] -> [a] -> [a],
  another to define cycle2 :: [a] -> [a]. This was her (the majority of the
 students in this course is female!) solution:

  append :: [a] -> [a] -> [a]
  append [] ys = ys
  append xs [] = xs
  append (x:xs) ys = x : (append xs ys)
 
  cycle2 :: [a] -> [a]
  cycle2 [] = error "empty list"
  cycle2 xs = append xs (cycle2 xs)
 

 This definition of append works fine on any non-bottom input (empty, finite,
 infinite), but due to the unnecessary second append case, cycle2 never
 produces any result.

 Martijn.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] purely functional lazy non-deterministic programming

2009-06-08 Thread Sebastian Fischer

[crosspost from Haskell-libraries and Curry mailing list]

Dear Haskell and Curry programmers,

there is now a Haskell library that supports lazy functional-logic  
programming in Haskell. It is available from


http://sebfisch.github.com/explicit-sharing

and can be obtained from Hackage using cabal-install. The project page  
links to tutorials that explain how to use the library and to an  
ICFP'09 paper (joint work with Oleg Kiselyov and Chung-chieh Shan)  
that explains the implemented ideas in depth.


Have fun!
Sebastian

--
Underestimating the novelty of the future is a time-honored tradition.
(D.G.)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Problem with Data.Map

2009-06-08 Thread michael rice
I'm trying to understand Map type for possible use in another problem I'm 
working on, but am stymied right off the bat.

==Here's my source:

import Data.Map (Map)
import qualified Data.Map as Map

l1 = "abc"

l2 = [1,2,3]

==Here's my error:

Prelude> :l maptest
[1 of 1] Compiling Main ( maptest.hs, interpreted )
Ok, modules loaded: Main.
*Main> l1
Loading package syb ... linking ... done.
Loading package array-0.2.0.0 ... linking ... done.
Loading package containers-0.2.0.0 ... linking ... done.
"abc"
*Main> l2
[1,2,3]
*Main> zip l1 l2
[('a',1),('b',2),('c',3)]
*Main> fromList $ zip l1 l2

<interactive>:1:0: Not in scope: `fromList'
*Main> 

===

Why isn't this working?

Michael




  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread Jochem Berndsen
michael rice wrote:
 I'm trying to understand Map type for possible use in another problem I'm 
 working on, but am stymied right off the bat.
 
 ==Here's my source:
 
 import Data.Map (Map)
 import qualified Data.Map as Map
 
 *Main> fromList $ zip l1 l2
 
 <interactive>:1:0: Not in scope: `fromList'

You imported Data.Map qualified as Map; that means that only 'Map.fromList'
and 'Data.Map.fromList' are in scope, and not 'fromList'. The reason one
normally does it like this is that a lot of function names clash with
the Prelude (on purpose). Normally one uses "qualified as M" or
"qualified as Map" to shorten the notation.

HTH,

-- 
Jochem Berndsen | joc...@functor.nl
GPG: 0xE6FABFAB
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread michael rice
I don't understand your response. I copied the imports from Hoogle's Data.Map 
page. What should the imports be?

Michael

--- On Mon, 6/8/09, Jochem Berndsen joc...@functor.nl wrote:

From: Jochem Berndsen joc...@functor.nl
Subject: Re: [Haskell-cafe] Problem with Data.Map
To: michael rice nowg...@yahoo.com
Cc: haskell-cafe@haskell.org
Date: Monday, June 8, 2009, 12:45 PM

michael rice wrote:
 I'm trying to understand Map type for possible use in another problem I'm 
 working on, but am stymied right off the bat.
 
 ==Here's my source:
 
 import Data.Map (Map)
 import qualified Data.Map as Map
 
 *Main> fromList $ zip l1 l2
 
 <interactive>:1:0: Not in scope: `fromList'

You imported map qualified as Map, that means that only 'Map.fromList'
and 'Data.Map.fromList' are in scope, and not 'fromList'. The reason one
normally does it like this is that a lot of function names clash with
the Prelude (on purpose). Normally one uses qualified as M or
qualified as Map to shorten the notation.

HTH,

-- 
Jochem Berndsen | joc...@functor.nl
GPG: 0xE6FABFAB



  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread michael rice
Gotcha. Thanks!

Also wondering why I need two imports for one module.

Michael

--- On Mon, 6/8/09, Jochem Berndsen joc...@functor.nl wrote:

From: Jochem Berndsen joc...@functor.nl
Subject: Re: [Haskell-cafe] Problem with Data.Map
To: michael rice nowg...@yahoo.com
Cc: haskell-cafe@haskell.org
Date: Monday, June 8, 2009, 12:52 PM

michael rice wrote:
 I don't understand your response. I copied the imports from Hoogles Data.Map 
 page. What should the imports be?
 
 Michael

The imports are fine, but instead of 'fromList' you should use
'Map.fromList' or 'Data.Map.fromList'.

Regards,
-- 
Jochem Berndsen | joc...@functor.nl
GPG: 0xE6FABFAB



  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread Jochem Berndsen
michael rice wrote:
 Gotcha. Thanks!
 
 Also wondering why I need two imports for one module.

This is not strictly necessary, but the scoping also applies to the type
'Map' itself, thus leaving the
import Data.Map (Map)
(this brings only the name Map in scope from module Data.Map) out
would force you to write Data.Map.Map everywhere instead of just 'Map'.

Regards,

-- 
Jochem Berndsen | joc...@functor.nl
GPG: 0xE6FABFAB
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread minh thu
Hi,

 import Blurp
brings everything defined in the Blurp module into scope.
So if blah is defined in Blurp,
 blah
will work as expected.

 import qualified Blurp as B
does the same thing but everything defined in Blurp should be
qualified (i.e. prefixed) with B.
 B.blah
will work, not
 blah

So your import statement is right but you should write Map.fromList
instead of just fromList.

The goal is to have multiple identical names from different modules
and still be able to use them at the same time, but qualified; for
instance, lookup is defined in both Data.List and Data.Map.

 import qualified Data.List as L
 import qualified Data.Map as M
 L.lookup
 M.lookup
The two last lines are unambiguous.
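
A complete toy example of the qualified style (the names here are made up):

 import Data.Map (Map)
 import qualified Data.Map as Map

 phoneBook :: Map String Int
 phoneBook = Map.fromList (zip ["ann", "bob"] [1, 2])

 main :: IO ()
 main = print (Map.lookup "ann" phoneBook)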

Cheers,
Thu


2009/6/8 michael rice nowg...@yahoo.com:
 I don't understand your response. I copied the imports from Hoogles Data.Map
 page. What should the imports be?

 Michael

 --- On Mon, 6/8/09, Jochem Berndsen joc...@functor.nl wrote:

 From: Jochem Berndsen joc...@functor.nl
 Subject: Re: [Haskell-cafe] Problem with Data.Map
 To: michael rice nowg...@yahoo.com
 Cc: haskell-cafe@haskell.org
 Date: Monday, June 8, 2009, 12:45 PM

 michael rice wrote:
 I'm trying to understand Map type for possible use in another problem I'm
 working on, but am stymied right off the bat.

 ==Here's my source:

 import Data.Map (Map)
 import qualified Data.Map as Map

 *Main> fromList $ zip l1 l2

 <interactive>:1:0: Not in scope: `fromList'

 You imported map qualified as Map, that means that only 'Map.fromList'
 and 'Data.Map.fromList' are in scope, and not 'fromList'. The reason one
 normally does it like this is that a lot of function names clash with
 the Prelude (on purpose). Normally one uses qualified as M or
 qualified as Map to shorten the notation.

 HTH,

 --
 Jochem Berndsen | joc...@functor.nl
 GPG: 0xE6FABFAB


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread Jochem Berndsen
michael rice wrote:
 I don't understand your response. I copied the imports from Hoogles Data.Map 
 page. What should the imports be?
 
 Michael

The imports are fine, but instead of 'fromList' you should use
'Map.fromList' or 'Data.Map.fromList'.

Regards,
-- 
Jochem Berndsen | joc...@functor.nl
GPG: 0xE6FABFAB
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problem with Data.Map

2009-06-08 Thread michael rice
Thanks, guys.

Sounds like Lisp packages. Now I see the functionality of the A or B, etc.: 
Fewer keystrokes.

Michael


--- On Mon, 6/8/09, minh thu not...@gmail.com wrote:

From: minh thu not...@gmail.com
Subject: Re: [Haskell-cafe] Problem with Data.Map
To: michael rice nowg...@yahoo.com
Cc: Jochem Berndsen joc...@functor.nl, haskell-cafe@haskell.org
Date: Monday, June 8, 2009, 1:00 PM

Hi,

 import Blurp
bring every thing defined in the Blurp module in scope.
So if blah is defined in blurp,
 blah
will work as expected.

 import qualified Blurp as B
does the same thing but everything defined in Blurp should be
qualified (i.e. prefixed) with B.
 B.blah
will work, not
 blah

So your import statement is right but you should write Map.fromList
instead of just fromList.

The goal is to have multiple identical names from different modules
and still be able to use them at the same time, but qualified, for
instance lookup is defined in both Data.List and Data.Map.

 import qualified Data.List as L
 import qualified Data.Map as M
 L.lookup
 M.lookup
The two last lines are unambiguous.

Cheers,
Thu


2009/6/8 michael rice nowg...@yahoo.com:
 I don't understand your response. I copied the imports from Hoogles Data.Map
 page. What should the imports be?

 Michael

 --- On Mon, 6/8/09, Jochem Berndsen joc...@functor.nl wrote:

 From: Jochem Berndsen joc...@functor.nl
 Subject: Re: [Haskell-cafe] Problem with Data.Map
 To: michael rice nowg...@yahoo.com
 Cc: haskell-cafe@haskell.org
 Date: Monday, June 8, 2009, 12:45 PM

 michael rice wrote:
 I'm trying to understand Map type for possible use in another problem I'm
 working on, but am stymied right off the bat.

 ==Here's my source:

 import Data.Map (Map)
 import qualified Data.Map as Map

 *Main> fromList $ zip l1 l2

 <interactive>:1:0: Not in scope: `fromList'

 You imported map qualified as Map, that means that only 'Map.fromList'
 and 'Data.Map.fromList' are in scope, and not 'fromList'. The reason one
 normally does it like this is that a lot of function names clash with
 the Prelude (on purpose). Normally one uses qualified as M or
 qualified as Map to shorten the notation.

 HTH,

 --
 Jochem Berndsen | joc...@functor.nl
 GPG: 0xE6FABFAB


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Building network package on Windows

2009-06-08 Thread Iavor Diatchki
Hi,
Interesting.  In that case, does anyone have any ideas about the
linker errors?
-Iavor

On Mon, Jun 8, 2009 at 12:42 AM, Thomas ten Cate ttenc...@gmail.com wrote:
 On Mon, Jun 8, 2009 at 02:04, Iavor Diatchki iavor.diatc...@gmail.com wrote:
 Hello,
 Here is an update, in case anyone else runs into the same problem.

 My understanding, is that the problem was caused by a mistake in the
 configure script for the network package, which after (correctly)
 detecting that IPv6 functionality was not available on my platform, it
 (incorrectly) tried to gain this functionality by redefining the
 version of my platform.  Concretely, apparently I have Windows Vista
 Basic Home Edition, which seems to identify itself as version 0x400,
 while the missing functions are only available on versions of windows
>= 0x501.

 0x400 is, if I'm not mistaken, Windows 95. Vista is 0x600 [1]. I don't
 think they *identify* themselves as such; rather, the program itself
 specifies what Windows versions it wants to be able to run on.

 In particular, the macros _WIN32_WINNT and WINVER should be defined as
 the *minimum* platform version on which the compiled binary is to
 work. Therefore, if functionality from XP (0x501) is needed, it is
 perfectly okay to redefine these macros to 0x501. This will flip some
 switches in included header files that enable declarations for the
 desired functionality. Of course, the binary will then only run on
 platforms that actually have this functionality.

 Hope that clears things up a bit.

 Thomas

 [1] http://msdn.microsoft.com/en-us/library/aa383745.aspx

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] ANNOUNCE: testrunner-0.9

2009-06-08 Thread Max Bolingbroke
2009/6/8 Reinier Lamers tux_roc...@reinier.de:
 I checked out testpack and that did not meet my requirements. I don't know if
 I considered test-framework. If I did, it may be that I was turned off by the
 fact that the 'home page' link on cabal just goes to a web presentation of the
 source tree on github.

Reinier,

You are quite right that this is a weakness. I've been meaning to put
a site together for a while, and your comment gave me the impetus to
do it:

http://batterseapower.github.com/test-framework/

That's much friendlier!

All the best,
Max
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Why do ghc-built binaries use timer_create?

2009-06-08 Thread Maurício

This comes from an issue in haskell-beginner, although
it has already been touched on here. If you use recent
versions of ghc to build a program and try the resulting
binary on an old linux distro, you may get a message
about timer_create receiving the wrong parameters.

Curiously, as suggested in the other thread, it seems
building with

ghc -threaded

solves the problem. So, I'm just curious: how is
timer_create used, and why maybe it's not used with
-threaded?

Thanks,
Maurício


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Incremental XML parsing with namespaces?

2009-06-08 Thread John Millikin
I'm trying to convert an XML document, incrementally, into a sequence
of XML events. A simple example XML document:

<doc xmlns="org:myproject:mainns" xmlns:x="org:myproject:otherns">
    <title>Doc title</title>
    <x:ref>abc1234</x:ref>
    <html xmlns="http://www.w3.org/1999/xhtml"><body>Hello world!</body></html>
</doc>

The document can be very large, and arrives in chunks over a socket,
so I need to be able to feed the text data into a parser and receive
a list of XML events per chunk. Chunks can be separated in time by
intervals of several minutes to an hour, so pausing processing for the
arrival of the entire document is not an option. The type signatures
would be something like:

type Namespace = String
type LocalName = String

data Attribute = Attribute Namespace LocalName String

data XMLEvent =
   EventElementBegin Namespace LocalName [Attribute] |
   EventElementEnd Namespace LocalName |
   EventContent String |
   EventError String

parse :: Parser -> String -> (Parser, [XMLEvent])
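
A driver loop on top of that signature would then be something like the
following sketch (Parser and parse being the hypothetical types above):

parseAll :: Parser -> [String] -> [XMLEvent]
parseAll _ []     = []
parseAll p (c:cs) = events ++ parseAll p' cs
  where (p', events) = parse p c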

I've looked at HaXml, HXT, and hexpat, and unless I'm missing
something, none of them can achieve this:

+ HaXml and hexpat seem to disregard namespaces entirely -- that is,
the root element is parsed to "doc" instead of
("org:myproject:mainns", "doc"), and the second child is "x:ref"
instead of ("org:myproject:otherns", "ref"). Obviously, this makes
parsing mixed-namespace documents effectively impossible. I found an
email from 2004[1] that mentions a filter for namespace support in
HaXml, but no further information and no working code.

+ HXT looks promising, because I see explicit mention in the
documentation of recording and propagating namespaces. However, I
can't figure out if there's an incremental mode. A page on the wiki[2]
suggests that SAX is supported in the html tag soup parser, but I
want incremental parsing of *valid* documents. If incremental parsing
is supported by the standard arrow interface, I don't see any
obvious way to pull events out into a list -- I'm a Haskell newbie,
and still haven't quite figured out monads yet, let alone Arrows.

Are there any libraries that support namespace-aware incremental parsing?

[1] http://www.haskell.org/pipermail/haskell-cafe/2004-June/006252.html
[2] 
http://www.haskell.org/haskellwiki/HXT/Conversion_of_Haskell_data_from/to_XML
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Applying Data.Map

2009-06-08 Thread michael rice
I wrote a Haskell solution for the Prolog problem stated below. I had written a 
function SQUISH before discovering that NUB does the same thing. While the 
solution works, I thought maybe I could apply some functions in the Data.Map 
module, and so wrote a second version of SERIALIZE, one no longer needing 
TRANSLATE. Using the Data.Map module is probably overkill for this particular 
problem, but I wanted to familiarize myself with the Map type. Suggestions welcome. 
Prolog code also included below for those interested.

Michael 

===

{-
 From Prolog By Example, Coelho, Cotta, Problem 42, pg. 63

   Verbal statement:
   Generate a list of serial numbers for the items of a given list,
   the members of which are to be numbered in alphabetical order.

   For example, the list [p,r,o,l,o,g] must generate [4,5,3,2,3,1]
-}

{-
Prelude> :l serialize
[1 of 1] Compiling Main ( serialize.hs, interpreted )
Ok, modules loaded: Main.
*Main> serialize "prolog"
[4,5,3,2,3,1]
*Main>
-} 

===Haskell code==

import Data.Char
import Data.List
import Data.Map (Map)
import qualified Data.Map as Map

{-
translate :: [Char] -> [(Char,Int)] -> [Int]
translate [] _ = []
translate (x:xs) m = (fromJust (lookup x m)) : (translate xs m )
-}

{-
serialize :: [Char] -> [Int]
serialize s = let c = nub $ sort s
                  n = [1..(length c)]
              in translate s (zip c n)
-}

serialize :: [Char] -> [Int]
serialize s = let c = nub $ sort s
                  n = [1..(length c)]
                  m = Map.fromList $ zip c n
              in map (\c -> m Map.! c) s
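
One possible tightening of the Data.Map version, using the same imports
(just a sketch):

serialize' :: String -> [Int]
serialize' s = map (m Map.!) s
  where m = Map.fromList (zip (nub (sort s)) [1 ..])

The explicit let and the lambda disappear, but it is the same algorithm.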

Prolog code

serialize(L,R) :- pairlists(L,R,A),arrange(A,T),
  numbered(T,1,N).
    ?  - typo?
pairlists([X|L],[Y|R],[pair(X,Y)|A]) :- pairlist(L,R,A).
pairlists([],[],[]). 

arrange([X|L],tree(T1,X,T2)) :- partition(L,X,L1,L2),
    arrange(L1,T1),
    arrange(L2,T2).
arrange([],_).

partition([X|L],X,L1,L2) :- partition(L,X,L1,L2).
partition([X|L],Y,[X|L1],L2) :- before(X,Y),
    partition(L,Y,L1,L2).
partition([X|L],Y,L1,[X|L2]) :- before(Y,X),
    partition(L,Y,L1,L2).
partition([],_,[],[]).

before(pair(X1,Y1),pair(X2,Y2)) :- X1X2.

numbered(tree(T1,pair(X,N1),T2),N0,N) :- numbered(T1,N0,N1),
 N2 is N1+1,
 numbered(T2,N2,N).
numbered(void,N,N).

Prolog examples
Execution:

?- serialize([p,r,o,l,o,g]).
   [4,5,3,2,3,1]
?- serialize ([i,n,t,.,a,r,t,i,f,i,c,i,a,l]).
  [5,7,9,1,2,8,9,5,4,5,3,5,2,6]




  ___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Comonadic composition and the game Doublets

2009-06-08 Thread Matthew

Today, I was working on coding a solver for the game doublets.
It's a word game where you transform the start word into the end
word one letter at a time (and the intermediate states must also
be valid words).  For example, one solution to ("warm", "cold") is

["warm", "ward", "card", "cord", "cold"]

So, one of my first goals was to write a function that would generate
the possible words a given word could transition to.  Here's a simple
recursive implementation:

transition :: [Char] -> [[Char]]
transition [] = []
transition (c:cs) = map (c:) (transition cs) ++ map (:cs) ['a'..'z']

For some reason, I find this formulation to be strangely unsatisfying.
It seems to me that this sort of computation -- i.e. modifying each
element of a container (but only one at a time) -- should be
expressible with some higher order function.  In a nutshell, what I'm
looking for would be a function, say each, with this type:

each :: (Container c) => (a -> a) -> c a -> c (c a)

I'm not particularly sure what Class to substitute for Container.
Comonad seemed like a promising solution initially, and provides
the basis for my current implementation of the doublets solver.

The problem being that cobind (extend) takes a function of type (w a -> a)
instead of (a -> a).  I suspect that there may be a clever way to write
instead of (a - a).  I suspect that there may be a clever way to write
each by using liftW in combination with .  or something like that,
but presently, I'm at a loss.

Has anyone else encountered this sort of abstraction before?  I'd love
to hear your ideas about what Classes each should support, and
how to implement it.
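
For lists alone, a minimal sketch of that signature (ignoring the Container
class question) could be:

eachList :: (a -> a) -> [a] -> [[a]]
eachList _ []     = []
eachList f (x:xs) = (f x : xs) : map (x :) (eachList f xs)

With that, transition above is, up to the ordering of the results,
concatMap (\c -> eachList (const c) w) ['a'..'z'] for a word w.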


Matt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] com-1.2.3 - problems installing

2009-06-08 Thread Günther Schmidt

Hi all,

I'm trying to install com-1.2.3 on Win XP with ghc-6.10.3

I get this error message:

Resolving dependencies...
Configuring com-1.2.3...
cabal: Missing dependencies on foreign libraries:
* Missing header file: include/WideStringSrc.h
* Missing C libraries: kernel32, user32, ole32, oleaut32, advapi32
This problem can usually be solved by installing the system packages that
provide these libraries (you may need the -dev versions). If the libraries
are already installed but in a non-standard location then you can use the
flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
cabal: Error: some packages failed to install:
com-1.2.3 failed during the configure step. The exception was:
exit: ExitFailure 1

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Building network package on Windows

2009-06-08 Thread Bryan O'Sullivan
On Sun, Jun 7, 2009 at 5:04 PM, Iavor Diatchki iavor.diatc...@gmail.com wrote:

 Here is an update, in case anyone else runs into the same problem.


Thanks for following up. I wrote the code that performs that check, but
unfortunately I don't have access to all of the permutations of Windows that
are out there, so my ability to test is rather limited. I'm sorry for the
trouble it caused you. Perhaps Vista Home Basic doesn't have IPv6 support?
If that conjecture is true, I'm not sure how I'd have found it out :-( More
likely, the name mangling is going wrong.

As for your point that the network package exhibits different APIs depending
on the underlying system, that's true, but it's hard to avoid. Writing a
compatibility API for systems that don't have functioning IPv6 APIs is a
chunk of boring work, and I had thought that such systems were rare.

Anyway, please do file a bug, and we'll take the discussion of how to
reproduce and fix your problem there.

Thanks,
Bryan.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why do ghc-built binaries use timer_create?

2009-06-08 Thread Bryan O'Sullivan
On Mon, Jun 8, 2009 at 11:23 AM, Maurício briqueabra...@yahoo.com wrote:

 This comes from an issue in haskell-beginner, although
 it have already been touched here. If you use recent
 versions of ghc to build a program and try the resulting
 binary on an old linux distro, you may get a message
 about timer_create receiving the wrong parameters.


For better or worse, this is something that people should not be trying in
the first place, and the problem you report is a representative example of
why. Taking code compiled on a new system and trying to run it on an old
system will often fail due to API or ABI incompatibilities. Sometimes those
failures are easy to see, as in your case; other times, they'll result in
more subtle problems. The timer_create issue is a bit of a red herring. If
it wasn't that, something else would probably break instead.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread wren ng thornton

Thomas ten Cate wrote:

Niemeijer, R.A. wrote:
 which, face it, is going to be all of them; I doubt Haskell
 is popular enough yet to be the target of DoS attacks

Second that. I think this is a good case in which some security should
be traded in for usability.


Those who would trade security for usability deserve neither usability 
nor security ;)


Seriously, all the Haskell hackers I've encountered have been good 
people, but Haskell is the language of the hair shirt after all. Security 
is hard enough to come by in the first place, sacrificing what little we 
have is not the right path. The Haskell interwebs are already too 
susceptible to downtimes from non-malicious sources, and it floods 
#haskell whenever it happens.


I agree that the turnaround time is a bit long, but I think server 
stability is more important than instant feedback. It's easy enough to 
get an account on community.haskell.org and just upload your own docs 
there[1]. The thing I'd be more interested in getting quick feedback on 
is whether compilation succeeds in the Hackage environment, which is 
very different from my own build environment.


Given the various constraints mentioned, and depending on the load 
averages for the servers, perhaps the simplest thing to do would be to 
just reduce the latency somewhat. Flushing the queue every 4~6 hrs seems 
both long enough to circumvent the major DoS problems, and short enough 
to be helpful to developers. Especially if the queue could be set up to 
be fair among users (e.g. giving each user some fixed number of slots or 
cycles per flush, delaying the rest until the next cycle). A different 
approach would be to do exponential slowdown per user. So if a user has 
submitted N jobs in the last Window (e.g. 24 hours), then the job is run 
around Epsilon^N after submission (where Epsilon is, say, 3 minutes).
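
As a rough sketch of that per-user exponential slowdown (the function name and
types here are my own invention, not anything from the real Hackage code):

  -- Minutes to wait before running a job, given how many jobs the same
  -- user submitted in the current window; epsilon is wren's 3 minutes.
  delayMinutes :: Double -> Int -> Double
  delayMinutes epsilon n = epsilon ^ n

  -- delayMinutes 3 1 == 3.0, delayMinutes 3 4 == 81.0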



[1] I do: http://community.haskell.org/~wren/ The scripts to automate it 
are trivial, but I can share them if people like.


--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Incremental XML parsing with namespaces?

2009-06-08 Thread Malcolm Wallace

On 8 Jun 2009, at 19:39, John Millikin wrote:

+ HaXml and hexpat seem to disregard namespaces entirely -- that is,
the root element is parsed to "doc" instead of
("org:myproject:mainns", "doc"), and the second child is "x:ref"
instead of ("org:myproject:otherns", "ref").


Yes, HaXml makes no special effort to deal with namespaces.  However,  
that does not mean that dealing with namespaces is impossible - it  
just requires a small amount of post-processing, that is all.


For instance, it would not be difficult to start from the SAX-like  
parser


http://hackage.haskell.org/packages/archive/HaXml/1.19.7/doc/html/Text-XML-HaXml-SAX.html

taking e.g. a constructor value
SaxElementOpen Name [Attribute]

and converting it to your corresponding constructor value
EventElementBegin Namespace LocalName [Attribute]

Just filter the [Attribute] of the first type for the attribute name  
"xmlns", and pull that attribute value out to become your new  
Namespace value.


Obviously there is a bit more to it than that, since namespace  
*defining* attributes, like your example xmlns:x="...", have a  
lexical scope.  You will need some kind of state to track the scope,  
possibly in the parser itself, or again possibly in a post-processing  
step over the list of output XMLEvents.
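
A rough, self-contained sketch of that post-processing pass follows; the event
types here are simplified stand-ins rather than HaXml's real SaxElement and
Attribute types (check Text.XML.HaXml.SAX for those), and attributes themselves
are not namespace-resolved:

  import Data.List (isPrefixOf)
  import Data.Maybe (fromMaybe)

  type Name      = String
  type AttValue  = String
  type Namespace = String

  data SaxEvent = SaxOpen Name [(Name, AttValue)] | SaxClose Name
                  deriving (Eq, Show)
  data Event    = Begin Namespace Name [(Name, AttValue)] | End Namespace Name
                  deriving (Eq, Show)

  -- One scope = the prefix -> namespace bindings introduced by one element.
  type Scope = [(String, Namespace)]

  resolve :: [SaxEvent] -> [Event]
  resolve = go [[]]
    where
      go _ [] = []
      go scopes (SaxOpen qname attrs : rest) =
          Begin ns local attrs : go (newScope : scopes) rest
        where
          newScope = [ (drop 6 k, v) | (k, v) <- attrs, "xmlns:" `isPrefixOf` k ]
                  ++ [ (""      , v) | (k, v) <- attrs, k == "xmlns" ]
          (prefix, local) = splitQName qname
          ns = fromMaybe "" (lookup prefix (newScope ++ concat scopes))
      go scopes (SaxClose qname : rest) =
          End ns local : go (drop 1 scopes) rest
        where
          (prefix, local) = splitQName qname
          ns = fromMaybe "" (lookup prefix (concat scopes))

  splitQName :: Name -> (String, Name)
  splitQName qname = case break (== ':') qname of
    (local , ""       ) -> ("", local)
    (prefix, _ : local) -> (prefix, local)

  -- resolve [ SaxOpen "doc" [("xmlns","urn:main")]
  --         , SaxOpen "x:ref" [("xmlns:x","urn:other")] ]
  --   == [ Begin "urn:main" "doc" [("xmlns","urn:main")]
  --      , Begin "urn:other" "ref" [("xmlns:x","urn:other")] ]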


Regards,
Malcolm

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Why do ghc-built binaries use timer_create?

2009-06-08 Thread Maurício

This comes from an issue in haskell-beginner, (...)

For better or worse, this is something that people should not be trying 
in the first place, (...)


Sure! That's what I suggested in the original question. I'm actually
just curious on why timer_create is used at all. This is probably
just a small detail in program initialization, and maybe a link to
some description of what happens on program initialization (specially
ghc generated binaries) behind the naive user view would do it.

Thanks,
Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Incremental XML parsing with namespaces?

2009-06-08 Thread John Millikin
On Mon, Jun 8, 2009 at 1:44 PM, Malcolm
Wallacemalcolm.wall...@cs.york.ac.uk wrote:
 Yes, HaXml makes no special effort to deal with namespaces.  However, that
 does not mean that dealing with namespaces is impossible - it just
 requires a small amount of post-processing, that is all.

 For instance, it would not be difficult to start from the SAX-like parser

  http://hackage.haskell.org/packages/archive/HaXml/1.19.7/doc/html/Text-XML-HaXml-SAX.html

 taking e.g. a constructor value
    SaxElementOpen Name [Attribute]

 and converting it to your corresponding constructor value
    EventElementBegin Namespace LocalName [Attribute]

 Just filter the [Attribute] of the first type for the attribute name
 xmlns, and pull that attribute value out to become your new Namespace
 value.

 Obviously there is a bit more to it than that, since namespace *defining*
 attributes, like your example xmlns:x="...", have a lexical scope.  You
 will need some kind of state to track the scope, possibly in the parser
 itself, or again possibly in a post-processing step over the list of output
 XMLEvents.

The interface you linked to doesn't seem to have a way to resume
parsing. That is, I can't feed it chunks of text and have it generate
a (ParserState, [Event]) tuple for each chunk. Perhaps this is
possible in Haskell without explicit state management? I've tried to
write a test application to listen on a socket and print events as they
arrive, but with no luck.

Manually re-parsing the events isn't attractive, because it would
require writing at least part of the parser manually. I had hoped to
re-use an existing XML parser, rather than writing a new one.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Incremental XML parsing with namespaces?

2009-06-08 Thread John Millikin
On Mon, Jun 8, 2009 at 2:51 PM, wren ng thorntonw...@freegeek.org wrote:
 One issue you'll have to deal with is that since output is delivered
 on-line, by the time a validity (or well-formedness) error can be recognized
 by the parser it'll be too late. Thus the rest of your code will need to
 be able to back out any global changes they've done when an asynchronous
 error shows up.

 I'm sure you already knew that, but it's worth highlighting. (I recently
 wrote similar code, but in Java. Haven't had the chance/need to do any xml
 hacking in Haskell yet.)

This is acceptable -- I've written a small Python application that
performs incremental parsing on the data stream, and aborting on first
detection of an error is fine.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] How to compile base?

2009-06-08 Thread Henk-Jan van Tuyl


L.S.,

I tried to compile base-4.0.0.0 (on Windows XP) as follows:
  [...]\base\4.0.0.0>runhaskell Setup configure
  <command line>: module `Prelude' is not loaded
It seems that Base needs another way to compile, how?

--
Regards,
Henk-Jan van Tuyl


--
http://functor.bamikanarie.com
http://Van.Tuyl.eu/
--


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comonadic composition and the game Doublets

2009-06-08 Thread Dan Weston
I think the structure you are looking for is called a wedge sum [1], 
which is the coproduct in the category of the pointed spaces, each of 
which is (in this case) the group action of changing one letter to 
another in the ith position of a word of fixed length.


One small tricky part is that, in contrast to the direct product of n 
1-D spaces with a list comprehension, which enumerates the product space 
with duplicates:


[(x,y,z,...) | x <- xs, y <- ys, z <- zs, ... ]

with a wedge sum a naive algorithm overcounts the point (or origin, in 
this case the identity function). This can be seen in your transition 
function, where non-identity transformations are counted only once, but 
the identity transformation is counted n times:


*Main> length . filter (== "abd") . transition $ "abc"
1
*Main> length . filter (== "abc") . transition $ "abc"
3

If you want your result to be a set, you may want to treat the identity 
transformation separately.
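
A minimal sketch of that suggestion, reusing the shape of Matthew's transition
but skipping the identity moves (my own reformulation, not code from the
thread):

  import Data.List (delete)

  -- one-letter changes only, no duplicates, no identity word
  transition' :: String -> [String]
  transition' []     = []
  transition' (c:cs) = map (c:) (transition' cs)
                    ++ [ c' : cs | c' <- delete c ['a'..'z'] ]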


[1] http://en.wikipedia.org/wiki/Wedge_sum

Dan

Matthew wrote:

Today, I was working on coding a solver for the game doublets.
It's a word game where you transform the start word into the end
word one letter at a time (and the intermediate states must also
be valid words).  For example, one solution to (warm, cold) is

[warm, ward, card, cord, cold]

So, one of my first goals was to write a function that would generate
the possible words a given word could transition to.  Here's a simple
recursive implementation:

transition :: [Char] -> [[Char]]
transition [] = []
transition (c:cs) = map (c:) (transition cs) ++ map (:cs) ['a'..'z']

For some reason, I find this formulation to be strangely unsatisfying.
It seems to me that this sort of computation -- i.e. modifying each
element of a container (but only one at a time) -- should be
expressible with some higher order function.  In a nutshell, what I'm
looking for would be a function, say each with this type.

each :: (Container c) => (a -> a) -> c a -> c (c a)

I'm not particularly sure what Class to substitute for Container.
Comonad seemed like a promising solution initially, and provides
the basis for my current implementation of the doublets solver.

The problem being that cobind (extend) takes a function of type (w a -> a)
instead of (a -> a).  I suspect that there may be a clever way to write
each by using liftW in combination with .  or something like that,
but presently, I'm at a loss.

Has anyone else encountered this sort of abstraction before.  I'd love
to hear your ideas about what Classes each should support, and
how to implement it.


Matt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comonadic composition and the game Doublets

2009-06-08 Thread Dan Weston
Oops. Make that: a list comprehension, which enumerates the product 
space *without* duplicates!


Dan Weston wrote:
I think the structure you are looking for is called a wedge sum [1], 
which is the coproduct in the category of the pointed spaces, each of 
which is (in this case) the group action of changing one letter to 
another in the ith position of a word of fixed length.


One small tricky part is that, in contrast to the direct product of n 
1-D spaces with a list comprehension, which enumerates the product space 
with duplicates:


[(x,y,z,...) | x <- xs, y <- ys, z <- zs, ... ]

with a wedge sum a naive algorithm overcounts the point (or origin, in 
this case the identity function). This can be seen in your transition 
function, where non-identity transformations are counted only once, but 
the identity transformation is counted n times:


*Main> length . filter (== "abd") . transition $ "abc"
1
*Main> length . filter (== "abc") . transition $ "abc"
3

If you want your result to be a set, you may want to treat the identity 
transformation separately.


[1] http://en.wikipedia.org/wiki/Wedge_sum

Dan

Matthew wrote:

Today, I was working on coding a solver for the game doublets.
It's a word game where you transform the start word into the end
word one letter at a time (and the intermediate states must also
be valid words).  For example, one solution to (warm, cold) is

[warm, ward, card, cord, cold]

So, one of my first goals was to write a function that would generate
the possible words a given word could transition to.  Here's a simple
recursive implementation:

transition :: [Char] -> [[Char]]
transition [] = []
transition (c:cs) = map (c:) (transition cs) ++ map (:cs) ['a'..'z']

For some reason, I find this formulation to be strangely unsatisfying.
It seems to me that this sort of computation -- i.e. modifying each
element of a container (but only one at a time) -- should be
expressible with some higher order function.  In a nutshell, what I'm
looking for would be a function, say each with this type.

each :: (Container c) => (a -> a) -> c a -> c (c a)

I'm not particularly sure what Class to substitute for Container.
Comonad seemed like a promising solution initially, and provides
the basis for my current implementation of the doublets solver.

The problem being that cobind (extend) takes a function of type (w a -> a)
instead of (a -> a).  I suspect that there may be a clever way to write
each by using liftW in combination with .  or something like that,
but presently, I'm at a loss.

Has anyone else encountered this sort of abstraction before.  I'd love
to hear your ideas about what Classes each should support, and
how to implement it.


Matt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Comonadic composition and the game Doublets

2009-06-08 Thread Ryan Ingram
You can write this in terms of comonads if you have this additional function:

] focus :: (Container w) => (a -> a) -> (w a -> w a)

with the idea that focus modifies the current point of the comonad
(the thing returned by extract) while leaving the rest alone.

] focus f w = something (f $ extract w)

For lists, this is easy:

 focus f (x:xs) = f x : xs

Then we can use comonadic extend to build the sublists:

 each :: (a -> a) -> [a] -> [[a]]
 each = extend . focus

However, I think each element of the result might be focused on the
wrong target; we may need to be able to re-focus the lists at the
beginning.  (I haven't tested this code...)
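
For comparison, here is a direct list-only each that sidesteps the re-focusing
issue entirely, because every sublist is rebuilt in full (my own sketch, not
code from the thread):

  each :: (a -> a) -> [a] -> [[a]]
  each _ []     = []
  each f (x:xs) = (f x : xs) : map (x :) (each f xs)

  -- Matthew's transition is then, e.g.:
  --   transition w = concat [ each (const c) w | c <- ['a'..'z'] ]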

  -- ryan

On Mon, Jun 8, 2009 at 12:24 PM, Matthewadn...@gmail.com wrote:
 Today, I was working on coding a solver for the game doublets.
 It's a word game where you transform the start word into the end
 word one letter at a time (and the intermediate states must also
 be valid words).  For example, one solution to (warm, cold) is

        [warm, ward, card, cord, cold]

 So, one of my first goals was to write a function that would generate
 the possible words a given word could transition to.  Here's a simple
 recursive implementation:

        transition :: [Char] -> [[Char]]
        transition [] = []
        transition (c:cs) = map (c:) (transition cs) ++ map (:cs) ['a'..'z']

 For some reason, I find this formulation to be strangely unsatisfying.
 It seems to me that this sort of computation -- i.e. modifying each
 element of a container (but only one at a time) -- should be
 expressible with some higher order function.  In a nutshell, what I'm
 looking for would be a function, say each with this type.

        each :: (Container c) => (a -> a) -> c a -> c (c a)

 I'm not particularly sure what Class to substitute for Container.
 Comonad seemed like a promising solution initially, and provides
 the basis for my current implementation of the doublets solver.

 The problem being that cobind (extend) takes a function of type (w a -> a)
 instead of (a -> a).  I suspect that there may be a clever way to write
 each by using liftW in combination with .  or something like that,
 but presently, I'm at a loss.

 Has anyone else encountered this sort of abstraction before.  I'd love
 to hear your ideas about what Classes each should support, and
 how to implement it.


 Matt
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Incremental XML parsing with namespaces?

2009-06-08 Thread Henning Thielemann
John Millikin schrieb:
 On Mon, Jun 8, 2009 at 1:44 PM, Malcolm

 The interface you linked to doesn't seem to have a way to resume
 parsing. That is, I can't feed it chunks of text and have it generate
 a (ParserState, [Event]) tuple for each chunk. Perhaps this is
 possible in Haskell without explicit state management? I've tried to
 write a test application to listen on a socket and print events as the
 arrive, but with no luck.
 
 Manually re-parsing the events isn't attractive, because it would
 require writing at least part of the parser manually. I had hoped to
 re-use an existing XML parser, rather than writing a new one.

I think you could use the parser as it is and do the name parsing later.
Due to lazy evaluation both parsers would run in an interleaved way.


But you may also be interested in:

http://hackage.haskell.org/packages/archive/wraxml/0.4.2/doc/html/Text-XML-WraXML-Tree-Tagchup.html

using
http://hackage.haskell.org/packages/archive/xml-basic/0.1/doc/html/Text-XML-Basic-Name-Qualified.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Deprecated packages on Hackage?

2009-06-08 Thread Erik de Castro Lopo
Hi all,

Is there a way to list all the deprecated packages on hackage?

For instance I found this:

   
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/control-timeout-0.1.2

via a google search but its not listed on the main hackage packages
page. However, there are other packages which are marked DEPRECATED
in their description strings.

Finally, if a package is deprecated it might be useful to have
a reason as well so the hackage entry might say:

   Deprecated : true (replaced by package XXX)

or

   Deprecated : true (needs maintainer)

Cheers,
Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] A generics question

2009-06-08 Thread Henry Laxen
Lets suppose I have a file that has encoded things of different
types as integers, and now I would like to convert them back
into specific instances of a data type.  For example, I have a
file that contains 1,1,2,3 and I would like the output to be
[Red, Red, Green, Blue]. I also would like to do this
generically, so that if I wanted to convert the same list of
integers into say Sizes, I would get [Small, Small, Medium,
Large]. Now please have a look at the following code:

{-# LANGUAGE DeriveDataTypeable #-}
import Data.Generics
data Color = Red | Green | Blue deriving (Eq,Ord,Read,Show,Typeable,Data)
data Size  = Small | Medium | Large deriving (Eq,Ord,Read,Show,Typeable,Data)
g = Green

c = undefined :: Color
s = undefined :: Size

t = do
  print $   toConstr g  -- Green
  print $ dataTypeOf c  -- DataType {tycon = Main.Color, datarep = AlgRep [Red,Green,Blue]}

convert :: (Data a, Data b) => Int -> a -> b
convert i x =
  let c = dataTypeConstrs (dataTypeOf x) !! (i-1)
  in fromConstr c


I would like to be able to say: x = convert 1 c and have it
assign Red to x then I would like to say: y = convert 1 s and
have it assign Small to y, however, when I try that I get:

Ambiguous type variable `b' in the constraint:
  `Data b' arising from a use of `convert' at <interactive>:1:8-18
Probable fix: add a type signature that fixes these type variable(s)

Of course if I say x :: Color = convert 1 c, it works, but I
would like to avoid that if possible, as all of the information
is already contained in the parameter c.  Is there any way to do
this?  Thanks in advance for your wise counsel.

Best wishes,
Henry Laxen


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Duncan Coutts
On Mon, 2009-06-08 at 11:24 +0200, Ketil Malde wrote:
 Niemeijer, R.A. r.a.niemei...@tue.nl writes:
 
  If that is the main concern, would the following not work?
   
   [...]
 
  Result: immediate documentation for every contributor with good
  intentions

Having the server generate docs itself would be regressing towards a
worse design. The server should just manage upload/download, storage and
management of information. It should not be running builds and
generating docs.

 Or simply, on upload, generate the doc directory with a temporary page
 saying that documentation will arrive when it's good and ready?

And use a design where there isn't just a single build client, like the
design for the new hackage-server. Any authorised client should be able
to upload docs. That should include the package maintainer as well as
authorised build bots. Then we can easily adjust the time between
package upload and documentation generation without having to tell the
server anything.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Deprecated packages on Hackage?

2009-06-08 Thread Ross Paterson
On Tue, Jun 09, 2009 at 08:54:17AM +1000, Erik de Castro Lopo wrote:
 Is there a way to list all the deprecated packages on hackage?

Not unless you have an account on that machine.  They're hiding, because
their maintainers wanted them withdrawn.

 Finally, if a package is deprecated it might be useful to have
 a reason as well so the hackage entry might say:
 
Deprecated : true (replaced by package XXX)
 
 or
 
Deprecated : true (needs maintainer)

The former exists, and is used by the following:

http-shed -> httpd-shed
ieee754-parser -> data-binary-ieee754
Thingie -> Hieroglyph
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Building network package on Windows

2009-06-08 Thread Iavor Diatchki
Hi,
As Thomas pointed out, it is not clear if this is a bug, or if there
is something confused between the different versions of Windows and
MinGW (or I just did something wrong) but I'll make a ticket so that
we can track the issue.  I am by no means a Windows developer but I
would be happy to try out fixes/ideas on my Windows machine as I think
that it is important that we have as good support for Windows as we do
on the various Unix-like systems.
-Iavor

On Mon, Jun 8, 2009 at 1:23 PM, Bryan O'Sullivanb...@serpentine.com wrote:
 On Sun, Jun 7, 2009 at 5:04 PM, Iavor Diatchki iavor.diatc...@gmail.com
 wrote:

 Here is an update, in case anyone else runs into the same problem.

 Thanks for following up. I wrote the code that performs that check, but
 unfortunately I don't have access to all of the permutations of Windows that
 are out there, so my ability to test is rather limited. I'm sorry for the
 trouble it caused you. Perhaps Vista Home Basic doesn't have IPv6 support?
 If that conjecture is true, I'm not sure how I'd have found it out :-( More
 likely, the name mangling is going wrong.

 As for your point that the network package exhibits different APIs depending
 on the underlying system, that's true, but it's hard to avoid. Writing a
 compatibility API for systems that don't have functioning IPv6 APIs is a
 chunk of boring work, and I had thought that such systems were rare.

 Anyway, please do file a bug, and we'll take the discussion of how to
 reproduce and fix your problem there.

 Thanks,
 Bryan.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Slow documentation generation on Hackage

2009-06-08 Thread Ross Paterson
On Mon, Jun 08, 2009 at 04:36:14AM -0400, Brandon S. Allbery KF8NH wrote:
 Additionally, I *think* haddock is run as part of the automated build  
 tests, which (again) happen on a regular schedule instead of being  
 triggered by uploads to avoid potential denial of service attacks.

That's correct.  One workaround is to upload your package just before
0:00, 6:00, 12:00 or 18:00 UK time.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A generics question

2009-06-08 Thread Jason Dagit
On Mon, Jun 8, 2009 at 4:10 PM, Henry Laxen nadine.and.he...@pobox.comwrote:

 Lets suppose I have a file that has encoded things of different
 types as integers, and now I would like to convert them back
 into specific instances of a data type.  For example, I have a
 file that contains 1,1,2,3 and I would like the output to be
 [Red, Red, Green, Blue]. I also would like to do this
 generically, so that if I wanted to convert the same list of
 integers into say Sizes, I would get [Small, Small, Medium,
 Large]  Now please have a look at the following code:

 {-# LANGUAGE DeriveDataTypeable #-}
 import Data.Generics
 data Color = Red | Green | Blue deriving (Eq,Ord,Read,Show,Typeable,Data)
 data Size  = Small | Medium | Large deriving
 (Eq,Ord,Read,Show,Typeable,Data)


What about making both of these instances of Enum instead of using Data and
Typeable?

You'd get fromEnum and toEnum.  Which I think, would give you the int
mapping that you are after.

fromEnum :: Enum a => a -> Int
toEnum :: Enum a => Int -> a
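
A short illustration of that suggestion for the Color/Size example (my own
code; note that toEnum is 0-based, so the 1-based codes from the file need
shifting):

  data Color = Red | Green | Blue     deriving (Eq, Ord, Read, Show, Enum, Bounded)
  data Size  = Small | Medium | Large deriving (Eq, Ord, Read, Show, Enum, Bounded)

  decode :: Enum a => [Int] -> [a]
  decode = map (toEnum . subtract 1)

  -- ghci> decode [1,1,2,3] :: [Color]
  -- [Red,Red,Green,Blue]
  -- ghci> decode [1,1,2,3] :: [Size]
  -- [Small,Small,Medium,Large]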

Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Applying Data.Map

2009-06-08 Thread Toby Hutton
Although in this example using Data.Map is overkill, if the alphabet was
very large then Data.Map probably would be the way to go. In that case I'd
use:

"map head . group . sort" instead of "nub . sort"

since it's noticeably quicker for large lists.  This is because nub needs to
preserve the order of input, removing redundancies, but you're sorting it
anyway.

Also, in map (\c -> m Map.! c) s you can use the 'section' (m Map.!)
instead.  e.g., map (m Map.!) s

The Map.! is ugly though.  As you're only using fromList and (!) from
Data.Map, I'd just import those explicitly since they don't clash with
Prelude.  Then you'd have map (m !) s
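
Putting those three suggestions together, the whole function might look like
this (my own sketch, not code from the thread):

  import Data.List (group, sort)
  import Data.Map (fromList, (!))

  serialize :: String -> [Int]
  serialize s = map (m !) s
    where
      uniq = map head . group . sort $ s
      m    = fromList (zip uniq [1 ..])

  -- ghci> serialize "prolog"
  -- [4,5,3,2,3,1]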

Toby.


On Tue, Jun 9, 2009 at 4:59 AM, michael rice nowg...@yahoo.com wrote:

 I wrote a Haskell solution for the Prolog problem stated below. I had
 written a function SQUISH before discovering that NUB does the same thing.
 While the solution works, I thought maybe I could apply some functions in
 the Data.Map module, and so wrote a second version of SERIALIZE, one no
 longer needing TRANSLATE. Using the Data.Map module is probably overkill for
 this particular problem, but wanted to familiarize myself with Map type.
 Suggestions welcome. Prolog code also included below for those interested.

 Michael

 ===

 {-
  From Prolog By Example, Coelho, Cotta, Problem 42, pg. 63

Verbal statement:
Generate a list of serial numbers for the items of a given list,
the members of which are to be numbered in alphabetical order.

For example, the list [p,r,o,l,o,g] must generate [4,5,3,2,3,1]
 -}

 {-
 Prelude> :l serialize
 [1 of 1] Compiling Main ( serialize.hs, interpreted )
 Ok, modules loaded: Main.
 *Main> serialize "prolog"
 [4,5,3,2,3,1]
 *Main>
 -}

 ===Haskell code==

 import Data.Char
 import Data.List
 import Data.Map (Map)
 import qualified Data.Map as Map

 {-
 translate :: [Char] -> [(Char,Int)] -> [Int]
 translate [] _ = []
 translate (x:xs) m = (fromJust (lookup x m)) : (translate xs m )
 -}

 {-
 serialize :: [Char] -> [Int]
 serialize s = let c = nub $ sort s
   n = [1..(length c)]
   in translate s (zip c n)
 -}

 serialize :: [Char] -> [Int]
 serialize s = let c = nub $ sort s
   n = [1..(length c)]
   m = Map.fromList $ zip c n
   in map (\c -> m Map.! c) s

 Prolog code

 serialize(L,R) :- pairlists(L,R,A),arrange(A,T),
   numbered(T,1,N).
 ?  - typo?
 pairlists([X|L],[Y|R],[pair(X,Y)|A]) :- pairlist(L,R,A).
 pairlists([],[],[]).

 arrange([X|L],tree(T1,X,T2)) :- partition(L,X,L1,L2),
 arrange(L1,T1),
 arrange(L2,T2).
 arrange([],_).

 partition([X|L],X,L1,L2) :- partition(L,X,L1,L2).
 partition([X|L],Y,[X|L1],L2) :- before(X,Y),
 partition(L,Y,L1,L2).
 partition([X|L],Y,L1,[X|L2]) :- before(Y,X),
 partition(L,Y,L1,L2).
 partition([],_,[],[]).

 before(pair(X1,Y1),pair(X2,Y2)) :- X1<X2.

 numbered(tree(T1,pair(X,N1),T2),N0,N) :- numbered(T1,N0,N1),
  N2 is N1+1,
  numbered(T2,N2,N).
 numbered(void,N,N).

 Prolog examples
 Execution:

 ?- serialize([p,r,o,l,o,g]).
[4,5,3,2,3,1]
 ?- serialize ([i,n,t,.,a,r,t,i,f,i,c,i,a,l]).
   [5,7,9,1,2,8,9,5,4,5,3,5,2,6]



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Incremental XML parsing with namespaces?

2009-06-08 Thread John Millikin
On Mon, Jun 8, 2009 at 3:39 PM, Henning
Thielemannlemm...@henning-thielemann.de wrote:
 I think you could use the parser as it is and do the name parsing later.
 Due to lazy evaluation both parsers would run in an interleaved way.

I've been trying to figure out how to get this to work with lazy
evaluation, but haven't made much headway. Tips? The only way I can
think of to get incremental parsing working is to maintain explicit
state, but I also can't figure out how to achieve this with the
parsers I've tested (HaXml, HXT, hexpat).

Here's a working example of what I'm trying to do, in Python. It reads
XML from stdin, prints events as they are parsed, and will terminate
when the document ends:

##

from xml.sax import handler, saxutils, expatreader

class ContentHandler (handler.ContentHandler):
    def __init__ (self):
        self.events = []
        self.level = 0

    def startElementNS (self, ns_name, lname, attrs):
        self.events.append (("BEGIN", ns_name, lname, dict (attrs)))
        self.level += 1

    def endElementNS (self, ns_name, lname):
        self.events.append (("END", ns_name, lname))
        self.level -= 1

    def characters (self, content):
        self.events.append (("TEXT", content))

def main ():
    parser = expatreader.ExpatParser ()
    content = ContentHandler ()
    parser.setFeature (handler.feature_namespaces, True)
    parser.setContentHandler (content)
    got_events = False
    while content.level > 0 or (not got_events):
        text = raw_input ("Enter XML:\n")
        parser.feed (text)
        print content.events
        content.events = []
        got_events = True

if __name__ == "__main__": main()

###

$ python incremental.py
Enter XML:
<test xmlns="urn:test"><test2><test3>
[('BEGIN', (u'urn:test', u'test'), u'test', {}), ('BEGIN',
(u'urn:test', u'test2'), u'test2', {}), ('BEGIN', (u'urn:test',
u'test3'), u'test3', {})]
Enter XML:
</test3></test2><test2 a="b"/>text content goes here
[('END', (u'urn:test', u'test3'), None), ('END', (u'urn:test',
u'test2'), None), ('BEGIN', (u'urn:test', u'test2'), u'test2', {(None,
u'a'): u'b'}), ('END', (u'urn:test', u'test2'), None), ('TEXT', u'text
content goes here')]
Enter XML:
</test>
[('END', (u'urn:test', u'test'), None)]

#

As demonstrated, the parser retains state (namespaces, nesting)
between text inputs. Are there any XML parsers for Haskell that
support this incremental behavior?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A generics question

2009-06-08 Thread Sterling Clover

On Jun 8, 2009, at 7:10 PM, Henry Laxen wrote:


convert :: (Data a, Data b) => Int -> a -> b
convert i x =
  let c = dataTypeConstrs (dataTypeOf x) !! (i-1)
  in fromConstr c

I would like to be able to say: x = convert 1 c and have it
assign Red to x then I would like to say: y = convert 1 s and
have it assign Small to y, however, when I try that I get:

Ambiguous type variable `b' in the constraint:
  `Data b' arising from a use of `convert' at <interactive>:1:8-18
Probable fix: add a type signature that fixes these type  
variable(s)


Of course if I say x :: Color = convert 1 c, it works, but I
would like to avoid that if possible, as all of the information
is already contained in the parameter c.  Is there any way to do
this?  Thanks in advance for your wise counsel.

Best wishes,
Henry Laxen



The type signature for 'convert' is throwing away the information you  
want.


Try it with the following type signature and it should work fine:

convert :: (Data a) => Int -> a -> a

Of course, as has been noted, SYB is a rather big sledgehammer for  
the insect in question.


Cheers,
S.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: StrictBench 0.1 - Benchmarking code through strict evaluation

2009-06-08 Thread Sterling Clover


On Jun 8, 2009, at 6:58 AM, Magnus Therning wrote:

Is there no way to force repeated evaluation of a pure value?  (It'd
be nice to be able to perform time measurements on pure code so that
it's possible to compare Haskell implementations of algorithms to
implementations in other languages, without running into confounding
factors.)



This perhaps introduces too much inefficiency, but one trick is to  
pack the computation into an existential.

i.e.
calculate :: Floating b => (forall c. Floating c => c) -> b
calculate = id

This method is used to evaluate the same numeric formula with  
different rounding modes in ieee-utils.


http://hackage.haskell.org/packages/archive/ieee-utils/0.4.0/doc/html/src/Numeric-IEEE-Monad.html#perturb
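
A toy illustration of the trick (my own example, written eta-expanded so it
also checks under newer GHCs): because the argument stays polymorphic, each use
re-elaborates it at the chosen instance instead of sharing one memoised result.

  {-# LANGUAGE RankNTypes #-}

  calculate :: Floating b => (forall c. Floating c => c) -> b
  calculate x = x

  main :: IO ()
  main = do
    print (calculate (pi * 2) :: Double)  -- approx. 6.283185307179586
    print (calculate (pi * 2) :: Float)   -- approx. 6.2831855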


Cheers,
S.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Deprecated packages on Hackage?

2009-06-08 Thread Erik de Castro Lopo
Ross Paterson wrote:

 On Tue, Jun 09, 2009 at 08:54:17AM +1000, Erik de Castro Lopo wrote:
  Is there a way to list all the deprecated packages on hackage?
 
 Not unless you have an account on that machine.  They're hiding, because
 their maintainers wanted them withdrawn.

Well there is at least one package (network-dns) where the maintainer
doesn't want to maintain it any more but would be happy for someone
else to take it over.

It would be nice if something like this could be represented in the
package metadata.

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A generics question

2009-06-08 Thread Stefan Holdermans

Henry,

Jason pointed out:

You'd get fromEnum and toEnum.  Which I think, would give you the  
int mapping that you are after.


fromEnum :: Enum a => a -> Int
toEnum :: Enum a => Int -> a


To me, this would indeed seem the way to go for your particular example.

Moreover, as for generic producer functions in general, the pattern  
suggested by the Prelude would be to have


  c :: Color
  c = undefined

  convert :: Data a => Int -> a
  convert i x =
let c = dataTypeConstrs (dataTypeOf x) !! (i-1)
in  fromConstr c

and then use it as in

  convert 1 `asTypeOf` c

You'll find out that in most cases the (pseudo) type annotation  
isn't really needed and the type of the value to produce can be  
determined automatically by the context.


Cheers,

  Stefan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A generics question

2009-06-08 Thread Stefan Holdermans

Henry,

Ah, pressed send way to early. Of course, the definition should change  
a little as well:


 convert :: Data a => Int -> a
 convert i = x where
   x = fromConstr ( dataTypeConstrs (dataTypeOf x) !! (i - 1) )
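
Assuming Henry's Color type and the dummy c = undefined :: Color are in scope,
usage would then look roughly like this (the results shown are what I would
expect, not a verified transcript):

  example :: [Color]
  example = map (\i -> convert i `asTypeOf` c) [1, 1, 2, 3]
  -- expected: [Red, Red, Green, Blue]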

Cheers,

  Stefan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Building network package on Windows

2009-06-08 Thread Iavor Diatchki
Hi,
OK, I think that I found and fixed the problem.  As Thomas pointed
out, the configure script is not wrong.  The problem turned out to be
the foreign import for getnameinfo (this was the missing symbol).
Attached to this e-mail should be a darcs patch that fixes the
problem.
-Iavor


On Mon, Jun 8, 2009 at 4:48 PM, Iavor Diatchkiiavor.diatc...@gmail.com wrote:
 Hi,
 As Thomas pointed out, it is not clear if this is a bug, or if there
 is something confused between the different versions of Windows and
 MinGW (or I just did something wrong) but I'll make a ticket so that
 we can track the issue.  I am by no means a Windows developer but I
 would be happy to try out fixes/ideas on my Windows machine as I think
 that it is important that we have as good support for Windows as we do
 on the various Unix-like systems.
 -Iavor

 On Mon, Jun 8, 2009 at 1:23 PM, Bryan O'Sullivanb...@serpentine.com wrote:
 On Sun, Jun 7, 2009 at 5:04 PM, Iavor Diatchki iavor.diatc...@gmail.com
 wrote:

 Here is an update, in case anyone else runs into the same problem.

 Thanks for following up. I wrote the code that performs that check, but
 unfortunately I don't have access to all of the permutations of Windows that
 are out there, so my ability to test is rather limited. I'm sorry for the
 trouble it caused you. Perhaps Vista Home Basic doesn't have IPv6 support?
 If that conjecture is true, I'm not sure how I'd have found it out :-( More
 likely, the name mangling is going wrong.

 As for your point that the network package exhibits different APIs depending
 on the underlying system, that's true, but it's hard to avoid. Writing a
 compatibility API for systems that don't have functioning IPv6 APIs is a
 chunk of boring work, and I had thought that such systems were rare.

 Anyway, please do file a bug, and we'll take the discussion of how to
 reproduce and fix your problem there.

 Thanks,
 Bryan.




patch
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: I love purity, but it's killing me.

2009-06-08 Thread Paul L
Interpreting lambda calculus is neither cheap nor efficient, otherwise
we wouldn't all be using compilers :-)

By interpretive overhead of adding Let/Rec/LetRec to an object
language I mean the need to introduce variables, scoping, and
environment (mapping variables to either values or structures they
bind to) during interpretations, which are otherwise not needed in the
object language. I can't show you how I can do better because I don't
have a solution. The open question is whether there exists such a
solution that's both elegant and efficient at maintain proper sharing
in the object language.

We certainly can get rid of all interpretive overheads by either
having a tagless interpreter (as in Oleg and Shan's paper), or by
direct compilation. But so far I don't see how a tagless interpreter
could handle sharing when it can't be distinguished in the host
language.

One would argue that the overhead of variables (and the environment
associated with them) can be avoided by having a higher order syntax,
but that has its own problem. Let me illustrate with a data structure
that uses higher order Rec.

  data C a
    = Val a
    | ...
    | Rec (C a -> C a)

  val :: C a -> a
  val (Val x) = x
  val ...
  val (Rec f) = val (fix f) where fix f = f (fix f)

  update :: C a -> C a
  update (Val x) = ...
  update ...
  update (Rec f) = Rec (\x -> ...)

The problem is right there in the creation of a new closure during
update (Rec f).
Haskell would not evaluate under lambda, and repeated updates will inevitably
result in space and time leaks.

-- 
Regards,
Paul Liu

Yale Haskell Group
http://www.haskell.org/yale

On 6/6/09, Chung-chieh Shan ccs...@post.harvard.edu wrote:
 On 2009-05-27T03:58:58-0400, Paul L wrote:
 One possible solution is to further introduce a fixed point data
 constructor, a Rec or even LetRec to explicitly capture cycles. But
 then you still incur much overheads interpreting them,

 I don't understand this criticism -- what interpretive overhead do you
 mean?  Certainly the Rec/LetRec encoding is pretty efficient for one
 object language with cycles, namely the lambda calculus with Rec or
 LetRec. :)

 One concrete way for you to explain what interpretive overhead you mean,
 if it's not too much trouble, might be to compare a Rec/LetRec encoding
 of a particular object language to another encoding that does not have
 the interpretive overhead you mean and is therefore more efficient.

 --
 Edit this signature at http://www.digitas.harvard.edu/cgi-bin/ken/sig
 We want our revolution, and we want it now! -- Marat/Sade
 We want our revolution, and we'll take it at such time as
  you've gotten around to delivering it  -- Haskell programmer

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe