Re: [GHC] #1449: Bug in instance MonadFix []

2007-06-22 Thread GHC
#1449: Bug in instance MonadFix []
---+
Reporter:  [EMAIL PROTECTED]  |Owner:
Type:  bug |   Status:  closed
Priority:  normal  |Milestone:
   Component:  libraries/base  |  Version:  6.6.1 
Severity:  normal  |   Resolution:  invalid   
Keywords:  |   Difficulty:  Unknown   
  Os:  Unknown | Testcase:  mfix (([1,2]++).(:[]))
Architecture:  Unknown |  
---+
Comment (by ross):

 Typo: "left sliding" should have been "left tightening".

 Regarding the `Show Char` instance, if there's a bug it's in the Haskell
 98 Report, as the implementations follow it faithfully in this case.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1449
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1444: Template Haskell: add proper support for qualified names in non-splicing applications

2007-06-22 Thread GHC
#1444: Template Haskell: add proper support for qualified names in non-splicing
applications
-+--
Reporter:  SamB  |Owner: 
Type:  feature request   |   Status:  new
Priority:  normal|Milestone: 
   Component:  Template Haskell  |  Version:  6.6.1  
Severity:  normal|   Resolution: 
Keywords:|   Difficulty:  Unknown
  Os:  Unknown   | Testcase: 
Architecture:  Unknown   |  
-+--
Comment (by simonpj):

 You'd refer to `GHC.Base.map` or wherever `map` is actually defined.

 The problem you are trying to solve is this: you want Derive to print out
 a Haskell program that mentions `map` and, when compiled, will get the
 right `map`.   The only way I know to do that for sure is

 a) print out `GHC.Base.map` and import `GHC.Base` (or whatever the right
 module is; Derive, being a TH program, can find that out easily).

 b) or add some new lexical syntax so you can `{-# ORIG #-} GHC.Base.map`
 as I suggested earlier.

 The advantage of (a) is that you can do it right away, without waiting for
 some not-yet-designed feature to be implemented.
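
 For example, a sketch of how a TH program could discover the qualified name
 itself (this assumes the nameBase and nameModule helpers in
 Language.Haskell.TH.Syntax; the defining module printed for `map` depends on
 the base version):
 {{{
 {-# LANGUAGE TemplateHaskell #-}
 module WhereIsMap where

 import Language.Haskell.TH.Syntax (Name, nameBase, nameModule)

 -- The quoted name 'map records where map is really defined, so a generator
 -- can print a fully qualified reference to it.
 mapName :: Name
 mapName = 'map

 qualifiedMap :: String
 qualifiedMap = case nameModule mapName of
   Just m  -> m ++ "." ++ nameBase mapName   -- e.g. "GHC.Base.map"
   Nothing -> nameBase mapName
 }}}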

 S

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1444
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


RE: un-existing import in boot-module

2007-06-22 Thread Simon Peyton-Jones
Hi Serge

Can you give us a test case?  And if so, submit a Trac bug report?

It works ok for me (see my example below), so I can't do anything without more 
info.

Simon

sh-2.04$ cat D.hs-boot
module D where
import BadModule

data X

sh-2.04$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.6.1

sh-2.04$ ghc -c D.hs-boot

D.hs-boot:2:0:
Failed to load interface for `BadModule':
  Use -v to see a list of the files searched for.
sh-2.04$



| -Original Message-
| From: [EMAIL PROTECTED] [mailto:glasgow-haskell-
| [EMAIL PROTECTED] On Behalf Of Serge D. Mechveliani
| Sent: 21 June 2007 17:04
| To: glasgow-haskell-bugs@haskell.org
| Subject: un-existing import in boot-module
|
| Dear GHC developers,
|
| ghc-6.6.1  does not report an unknown module import in boot-modules.
|
| For example, let us have  Foo.hs  and  Foo.hs-boot,
| and add to   Foo.hs-boot  the line
|
|   import M
|
| , where  M  is not a name of any existing module.
| ghc  `makes' the project without any report of this `M'.
|
| -
| Serge Mechveliani
| [EMAIL PROTECTED]
| ___
| Glasgow-haskell-bugs mailing list
| Glasgow-haskell-bugs@haskell.org
| http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #1451: Provide way to show the origin of a constraint

2007-06-22 Thread GHC
#1451: Provide way to show the origin of a constraint
+---
  Reporter:  [EMAIL PROTECTED]  |  Owner: 
  Type:  feature request| Status:  new
  Priority:  normal |  Milestone: 
 Component:  Compiler   |Version:  6.6.1  
  Severity:  normal |   Keywords: 
Difficulty:  Unknown| Os:  Unknown
  Testcase: |   Architecture:  Unknown
+---
For a complex type (A b, C d, E e) => Something a b e -> Int, provide a
 way to pose the query:
 where does A b come from? Respond with the line number of a function that
 causes that constraint. This should of course also work for non-Haskell 98
 constraints.

 This issue comes up when one by accident calls a function in the wrong
 layer of a monadic transformer stack.
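
 For concreteness, a small sketch (assuming mtl's Control.Monad.State; the
 names helper and caller are invented): the constraint on caller exists only
 because of the call to helper, and the request is for a way to ask GHC which
 call introduced it.
 {{{
 import Control.Monad.State (MonadState, get)

 -- 'helper' genuinely needs the constraint.
 helper :: MonadState Int m => m Int
 helper = get

 -- 'caller' inherits MonadState Int m solely because it calls 'helper';
 -- the requested feature would point at that call site.
 caller :: MonadState Int m => m Int
 caller = do
   n <- helper
   return (n + 1)
 }}}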

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1451
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1448: Segmentation fault with readline Module

2007-06-22 Thread GHC
#1448: Segmentation fault with readline Module
-+--
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug   |   Status:  closed 
Priority:  high  |Milestone: 
   Component:  Compiler  |  Version:  6.6
Severity:  critical  |   Resolution:  invalid
Keywords:  Segmentation Fault|   Difficulty:  Unknown
  Os:  Multiple  | Testcase:  yes
Architecture:  Multiple  |  
-+--
Changes (by simonmar):

  * resolution:  => invalid
  * status:  new => closed

Comment:

 You need to call `System.Console.Readline.initialize` before doing
 anything else.
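
 For reference, a minimal sketch of the fix (assuming the readline package's
 System.Console.Readline interface):
 {{{
 import System.Console.Readline (initialize, readline)

 main :: IO ()
 main = do
   initialize                  -- must come before any other readline call
   maybeLine <- readline "> "  -- now safe
   print maybeLine
 }}}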

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1448
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1447: Improve code generation for dense switches

2007-06-22 Thread GHC
#1447: Improve code generation for dense switches
-+--
Reporter:  simonpj   |Owner:  
Type:  task  |   Status:  new 
Priority:  normal|Milestone:  _|_ 
   Component:  Compiler  |  Version:  6.6.1   
Severity:  normal|   Resolution:  
Keywords:|   Difficulty:  Moderate (1 day)
  Os:  Unknown   | Testcase:  
Architecture:  Unknown   |  
-+--
Comment (by simonmar):

 I have a feeling the HEAD should be better here; I fixed some bugs in the
 switch compilation post 6.6 while working on tagging.  If you could try
 out the example with a recent GHC snapshot, that would be useful.  Or post
 some actual code so that we can reproduce it.
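
 For reference, a hypothetical example (not the reporter's code) of the kind
 of dense switch in question: a case over a small contiguous range of
 literals, which could be compiled to a jump table rather than a chain of
 comparisons.
 {{{
 denseSwitch :: Int -> Int
 denseSwitch n = case n of
   0 -> 10
   1 -> 20
   2 -> 30
   3 -> 40
   4 -> 50
   _ -> 0
 }}}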

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1447
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1452: undefined reference while linking executable (for cyclic import project) in ghc-6.6.1

2007-06-22 Thread GHC
#1452: undefined reference while linking executable (for cyclic import 
project)
in ghc-6.6.1
--+-
Reporter:  guest  |Owner:  
Type:  bug|   Status:  new 
Priority:  normal |Milestone:  
   Component:  Compiler   |  Version:  6.6.1   
Severity:  normal |   Resolution:  
Keywords:  cyclic import  |   Difficulty:  Moderate (1 day)
  Os:  Linux  | Testcase:  
Architecture:  x86|  
--+-
Changes (by [EMAIL PROTECTED]):

  * difficulty:  Unknown => Moderate (1 day)
  * keywords:  => cyclic import

Comment:

 The bug is easy to find: 18 .hs modules, where each value (except one small
 function) is trivially defined as (\ _ _ ... -> error ...). Its complexity
 is only in the many imported types and many imported values, with a bit of
 cyclic import.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1452
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #1452: undefined reference while linking executable (for cyclic import project) in ghc-6.6.1

2007-06-22 Thread GHC
#1452: undefined reference while linking executable (for cyclic import 
project)
in ghc-6.6.1
---+
  Reporter:  guest |  Owner:   
  Type:  bug   | Status:  new  
  Priority:  normal|  Milestone:   
 Component:  Compiler  |Version:  6.6.1
  Severity:  normal|   Keywords:   
Difficulty:  Unknown   | Os:  Linux
  Testcase:|   Architecture:  x86  
---+
http://www.botik.ru/pub/local/Mechveliani/ghcBugs/importCycleBug.zip

 Unzip and see  install.txt.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1452
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1451: Provide way to show the origin of a constraint

2007-06-22 Thread GHC
#1451: Provide way to show the origin of a constraint
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  feature request|   Status:  new
Priority:  normal |Milestone: 
   Component:  Compiler   |  Version:  6.6.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  Unknown| Testcase: 
Architecture:  Unknown|  
--+-
Comment (by simonpj):

 If you could give a concrete example of what you have in mind, it'd help.
 That is, exactly what do you want the programmer to be able to do or say?
 GHC does indeed report the origins for constraints in error messages, for
 example.

 S

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1451
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1452: undefined reference while linking executable (for cyclic import project) in ghc-6.6.1

2007-06-22 Thread GHC
#1452: undefined reference while linking executable (for cyclic import 
project)
in ghc-6.6.1
--+-
Reporter:  guest  |Owner:  [EMAIL PROTECTED]
Type:  bug|   Status:  new 
Priority:  normal |Milestone:  
   Component:  Compiler   |  Version:  6.6.1   
Severity:  normal |   Resolution:  
Keywords:  cyclic import  |   Difficulty:  Moderate (1 day)
  Os:  Linux  | Testcase:  
Architecture:  x86|  
--+-
Changes (by guest):

  * owner:  => [EMAIL PROTECTED]

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1452
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1452: undefined reference while linking executable (for cyclic import project) in ghc-6.6.1

2007-06-22 Thread GHC
#1452: undefined reference while linking executable (for cyclic import 
project)
in ghc-6.6.1
--+-
Reporter:  guest  |Owner:  guest   
Type:  bug|   Status:  assigned
Priority:  normal |Milestone:  
   Component:  Compiler   |  Version:  6.6.1   
Severity:  normal |   Resolution:  
Keywords:  cyclic import  |   Difficulty:  Moderate (1 day)
  Os:  Linux  | Testcase:  
Architecture:  x86|  
--+-
Changes (by guest):

  * status:  new => assigned
  * owner:  [EMAIL PROTECTED] => guest

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1452
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #1453: addition to #1452 : it ignores import UnknownModule

2007-06-22 Thread GHC
#1453: addition to  #1452  : it ignores  import UnknownModule
---+
  Reporter:  [EMAIL PROTECTED]  |  Owner:
  Type:  bug   | Status:  new   
  Priority:  normal|  Milestone:
 Component:  Compiler  |Version:  6.6.1 
  Severity:  normal|   Keywords:  unknown module
Difficulty:  Unknown   | Os:  Linux 
  Testcase:  ticket #1452  |   Architecture:  x86   
---+
See ticket #1452.
 Prepend the line  `import Unknown'  to the import decls in  Reduce.hs-boot,
 and run  make build.
 It builds without any report of the unknown module.
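
 A hypothetical minimal Reduce.hs-boot in the spirit of this report (the
 contents are sketched here, not taken from the zip in #1452):
 {{{
 -- Reduce.hs-boot
 module Reduce where

 import Unknown   -- no such module exists, yet --make reportedly accepts it

 data Reduce
 }}}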

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1453
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #1454: Confusing or bug: :t and :i don't return same type

2007-06-22 Thread GHC
#1454: Confusing or bug: :t and :i don't return same type
+---
  Reporter:  [EMAIL PROTECTED]  |  Owner: 
  Type:  bug| Status:  new
  Priority:  normal |  Milestone: 
 Component:  GHCi   |Version:  6.6.1  
  Severity:  normal |   Keywords: 
Difficulty:  Unknown| Os:  Unknown
  Testcase: |   Architecture:  Unknown
+---
:t function and :i function don't return the same type for complex
 types. Is this intentional?

 If it's not, it's a bug. :i returns the more general type, btw.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1454
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1451: Provide way to show the origin of a constraint

2007-06-22 Thread GHC
#1451: Provide way to show the origin of a constraint
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  feature request|   Status:  new
Priority:  normal |Milestone: 
   Component:  Compiler   |  Version:  6.6.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  Unknown| Testcase: 
Architecture:  Unknown|  
--+-
Comment (by [EMAIL PROTECTED]):

 Ok, given a complete module that compiles, I would ideally like to ask
 where, for example, the constraint MonadState (A b) (ST s) is generated.
 Since no such instance exists, it's a bug in my program, but
 GHC only starts to complain when I call a function that needs that
 instance, and pointing to the location of the call is of little use in
 this case. The information that it needs MonadState (A b) (ST s) typically
 comes from GHC itself already, but tracking down where exactly it gets
 used in that way (e.g. where the call to put or get is) is still my job.

 Ideally an IDE would show in which layer of the monad transformer stack
 everything is executed, making such bugs trivial to find instead of hard.
 Such an IDE will probably not exist for another 20 years, if ever,
 but GHC providing such a feature would be a start.
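
 A hedged sketch of the situation (the type A and the stack are invented for
 illustration): the intended place for get is the StateT layer; forcing the
 same call into the inner ST layer would demand the non-existent instance
 MonadState (A b) (ST s), and the resulting error points at the caller rather
 than at the misplaced get.
 {{{
 import Control.Monad.ST (ST)
 import Control.Monad.State (StateT, get)

 newtype A b = A b

 -- Correct placement: the MonadState (A b) instance comes from the StateT
 -- layer.  Writing the same 'get' at type ST s (A b) instead would require
 -- MonadState (A b) (ST s), which does not exist.
 step :: StateT (A b) (ST s) (A b)
 step = get
 }}}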

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1451
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


[GHC] #1455: Wildcard pattern _ generates defaulting warning

2007-06-22 Thread GHC
#1455: Wildcard pattern _ generates defaulting warning
---+
  Reporter:  guest |  Owner: 
  Type:  bug   | Status:  new
  Priority:  low   |  Milestone: 
 Component:  Compiler  |Version:  6.6
  Severity:  trivial   |   Keywords:  wildcard defaulting warning
Difficulty:  Unknown   | Os:  MacOS X
  Testcase:|   Architecture:  x86
---+
The ''second'' use of properFraction generates a Defaulting warning, even
 though the Integral value in question is being thrown away:

 {{{
 module Main where

 r :: Double
 r = pi

 main :: IO ()
 main =
let (_,x) = properFraction r :: (Integer,Double)
(_,y) = properFraction r
in  putStrLn $ show (x,y)
 }}}

 It is clearly trivial to work around this issue. However, if one takes the
 point of view that all of one's final code should compile using the -Wall
 -Werror flags, then the workaround confirms dynamic typers' worst fears
 about Haskell. One shouldn't have to say anything about a value that's
 being thrown away; the compiler should make the most efficient call about
 what to do, not e.g. some tightly wound mathematician who believes that
 Integer is the only valid Integral type.

 Here is the compiler output:

 {{{
 % ghc -v -Wall -Werror Bug.hs
 Glasgow Haskell Compiler, Version 6.6, for Haskell 98, compiled by GHC
 version 6.6
 Using package config file: /usr/local/lib/ghc-6.6/package.conf
 wired-in package base mapped to base-2.0
 wired-in package rts mapped to rts-1.0
 wired-in package haskell98 mapped to haskell98-1.0
 wired-in package template-haskell mapped to template-haskell-2.0
 Hsc static flags: -static
 Created temporary directory: /tmp/ghc12130_0
 *** Checking old interface for main:Main:
 *** Parser:
 *** Renamer/typechecker:

 Bug.hs:9:15:
 Warning: Defaulting the following constraint(s) to type `Integer'
  `Integral t' arising from use of `properFraction' at Bug.hs:9:15-30
  In the expression: properFraction r
  In a pattern binding: (_, y) = properFraction r
  In the expression:
  let
(_, x) = properFraction r :: (Integer, Double)
(_, y) = properFraction r
  in putStrLn $ (show (x, y))
 *** Deleting temp files:
 Deleting: /tmp/ghc12130_0/ghc12130_0.s
 Warning: deleting non-existent /tmp/ghc12130_0/ghc12130_0.s
 *** Deleting temp dirs:
 Deleting: /tmp/ghc12130_0
 }}}

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1455
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1454: Confusing or bug: :t and :i don't return same type

2007-06-22 Thread GHC
#1454: Confusing or bug: :t and :i don't return same type
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug|   Status:  new
Priority:  normal |Milestone: 
   Component:  GHCi   |  Version:  6.6.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  Unknown| Testcase: 
Architecture:  Unknown|  
--+-
Comment (by simonpj):

 Please give a concrete example!

 Simon

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1454
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1431: libraries: configure argument containing whitespace not passed correctly to Setup

2007-06-22 Thread GHC
#1431: libraries: configure argument containing whitespace not passed correctly 
to
Setup
-+--
Reporter:  greenrd   |Owner:  igloo  
Type:  bug   |   Status:  closed 
Priority:  normal|Milestone: 
   Component:  Build System  |  Version:  6.7
Severity:  major |   Resolution:  fixed  
Keywords:|   Difficulty:  Unknown
  Os:  Multiple  | Testcase: 
Architecture:  Multiple  |  
-+--
Changes (by igloo):

  * resolution:  => fixed
  * status:  new => closed

Comment:

 Fixed by
 {{{
 Fri Jun 22 17:09:51 BST 2007  Ian Lynagh [EMAIL PROTECTED]
   * Change how the libraries Makefile adds --configure-option= flags;
 fixes #1431
   We now assume that each configure option is quoted with '', and thus
   replace " '" with " --configure-option='".
 }}}

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1431
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1437: Build failures on Mac OS X 10.5

2007-06-22 Thread GHC
#1437: Build failures on Mac OS X 10.5
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug|   Status:  new
Priority:  normal |Milestone:  6.8
   Component:  Compiler   |  Version:  6.6.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  MacOS X| Testcase: 
Architecture:  x86|  
--+-
Comment (by [EMAIL PROTECTED]):

 On the latest Leopard build, the splitter incompatibility is still gone,
 but the crash is still there.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1437
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1454: Confusing or bug: :t and :i don't return same type

2007-06-22 Thread GHC
#1454: Confusing or bug: :t and :i don't return same type
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug|   Status:  new
Priority:  normal |Milestone: 
   Component:  GHCi   |  Version:  6.6.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  Unknown| Testcase: 
Architecture:  Unknown|  
--+-
Comment (by Isaac Dupree):

 Playing around in ghci:
 {{{
 let f a b = b^a
 :t f
 f :: (Num a, Integral b) => b -> a -> a
 :i f
 f :: (Integral b, Num a) => b -> a -> a
 }}}
 which mean the same thing but are, oddly, different... is this an example,
 or related to one?

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1454
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1455: Wildcard pattern _ generates defaulting warning

2007-06-22 Thread GHC
#1455: Wildcard pattern _ generates defaulting warning
+---
Reporter:  guest|Owner: 
Type:  bug  |   Status:  new
Priority:  low  |Milestone: 
   Component:  Compiler |  Version:  6.6
Severity:  trivial  |   Resolution: 
Keywords:  wildcard defaulting warning  |   Difficulty:  Unknown
  Os:  MacOS X  | Testcase: 
Architecture:  x86  |  
+---
Comment (by Isaac Dupree):

 Unfortunately, the type-class could be used in some way that determines
 other parts of the result. For example
 {{{
 properFractionNOT :: (RealFrac a, Integral b) => a -> (b, a)
 properFractionNOT a = (b', a')
   where
 a' = a + (fromIntegral (5000 `asTypeOf` b'))
 b' = 2
 }}}
 produces different answers depending on the type of b:
 {{{
  snd $ (properFractionNOT:: Rational -> (Integer,Rational)) 3
 5003%1
  snd $ (properFractionNOT:: Rational -> (Int,Rational)) 3
 937459715%1
 }}}
 So it is, at least, not _easy_ to detect situations where the class
 constraint is used only to determine the very same result...

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1455
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1434: Slow conversion from Double to Int

2007-06-22 Thread GHC
#1434: Slow conversion from Double to Int
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug|   Status:  new
Priority:  normal |Milestone: 
   Component:  libraries/base |  Version:  6.4.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  Linux  | Testcase: 
Architecture:  Unknown|  
--+-
Comment (by [EMAIL PROTECTED]):

 The following example is still certainly not the most compact one.
 The above examples led me to assume
 that the difference between {{{round}}} and {{{double2Int}}}
 is eliminated by the optimizer.
 However, when I use {{{Int16}}} instead of {{{Int}}},
 this optimization seems to fail again.
 {{{
 {-# OPTIONS -O2 #-}
 {-  -package fps -package binary -}
 module Main (main) where

 import System.Time (getClockTime, diffClockTimes, tdSec, tdPicosec)
 import System.Directory (removeFile)

 import qualified Data.ByteString.Lazy as B
 import qualified Data.Binary.Put as Bin

 import Foreign (Int16)

 import GHC.Float (double2Int)



 type Sample = Int16   -- 'truncate' is slow
 -- type Sample = Int -- 'truncate' is as fast as double2Int


 writeSignalMonoBinaryPut ::
    FilePath -> [Sample] -> IO ()
 writeSignalMonoBinaryPut fileName =
B.writeFile fileName .
Bin.runPut .
mapM_ (Bin.putWord16host . fromIntegral)



 numToSample :: Double -> Sample
 numToSample x =
-- fromIntegral $ (truncate x :: Int)
truncate x

 doubleToSample :: Double -> Sample
 doubleToSample x =
fromIntegral $ double2Int x



 {- * driver -}

 measureTime :: String -> IO () -> IO ()
 measureTime name act =
    do putStr (name++": ")
   timeA - getClockTime
   act
   timeB - getClockTime
   let td = diffClockTimes timeB timeA
   print (fromIntegral (tdSec td) +
  fromInteger (tdPicosec td) * 1e-12 :: Double)




 exponential2 :: Double -> Double -> [Double]
 exponential2 k = iterate (k*)


 sawSignalDouble :: Double -> [Double]
 sawSignalDouble halfLife =
take 20 (exponential2 halfLife 32767)

 sawSignal16Trunc :: [Sample]
 sawSignal16Trunc =
map numToSample (sawSignalDouble 0.999)

 sawSignal16LowLevel :: [Sample]
 sawSignal16LowLevel =
map doubleToSample (sawSignalDouble 0.999001)


 tests :: [(String, FilePath, FilePath - IO ())]
 tests =
    ("saw double2Int", "saw-double2Int.sw", flip writeSignalMonoBinaryPut
 sawSignal16LowLevel) :
    ("saw round",  "saw-round.sw",  flip writeSignalMonoBinaryPut
 sawSignal16Trunc) :
[]


 main :: IO ()
 main =
    do mapM (\(label, fileName, action) ->
   measureTime label (action fileName))
tests

   mapM_ (\(_,fileName,_) -> removeFile fileName)
tests
 }}}

 For {{{type Sample = Int16}}} I get
 {{{
 $ ghc-6.6.1 -package binary RoundTest.hs && a.out
 saw double2Int: 8.7173e-2
 saw round: 0.430843
 }}}
 For {{{type Sample = Int}}} I get
 {{{
 $ ghc-6.6.1 -package binary RoundTest.hs && a.out
 saw double2Int: 8.8279e-2
 saw round: 9.8028e-2
 }}}

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1434
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1454: Confusing or bug: :t and :i don't return same type

2007-06-22 Thread GHC
#1454: Confusing or bug: :t and :i don't return same type
--+-
Reporter:  [EMAIL PROTECTED]  |Owner: 
Type:  bug|   Status:  new
Priority:  normal |Milestone: 
   Component:  GHCi   |  Version:  6.6.1  
Severity:  normal |   Resolution: 
Keywords: |   Difficulty:  Unknown
  Os:  Unknown| Testcase: 
Architecture:  Unknown|  
--+-
Comment (by [EMAIL PROTECTED]):

 In the particular case I am referring to, the two types were of the form:
 (Context cxt) => more types -> sometype containing a type variable ->
 more types (this was for :t),

 but :i derived a type that was of the form:
 (Context cxt) => more types -> Concretetype -> more types

 So, your case is not an example of this. I hadn't seen the bug earlier, so
 it might only come up in really hairy code.

-- 
Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1454
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: 64-bit windows version?

2007-06-22 Thread Simon Marlow

Peter Tanski wrote:

Maybe this depends on the type of convenience you want to offer 
GHC-developers.  With the autoconf system they are required (for 
Windows) to download and install: Mingw, perl, python (for the 
testsuite), flex, happy, alex and some others I can't remember right 
now.  Oh yeah, GMP.


In fact, to build a source distribution on Windows, there are only 3 
dependencies:  GHC, Mingw and (either MSYS or Cygwin).


To build from darcs, you also need: darcs, Happy, and Alex.  To build docs, you 
also need Haddock.  To run the testsuite you need Python.


 When I did that, autoconf gave me the convenience 
of having those programs somewhere in my $PATH and if autoconf found 
them I was good to go.  The way I would change things would not be so 
different, except that the developer would have to set up the build 
environment: run from under Mingw, etc.  The benefit would be that 
instead of testing all the extra things autoconf tests for--path 
separator, shell variables, big/little endian, the type of ld and the 
commands to execute compile--those would be hard-wired because the 
system is known.  When things break down you don't have to search 
through the long lines of output because you know what the initial 
settings are and can even rely on them to help debug Make.

 This is the
way it is done for several systems: NextStep (now Apple) project 
makefiles, Jam, and many of the recent build systems, including CMake, 
Scons and WAF.


Ok, you clearly have looked at a lot more build systems than I have.  So you 
think there's a shift from autoconf-style figure out the configuration by 
running tests to having a database of configuration settings for various 
platforms?  I'm surprised - I thought conventional wisdom was that you should 
write your build system to be as independent as possible from the name of the 
build platform, so that the system is less sensitive to changes in its 
environment, and easier to port.  I can see how wiring-in the parameters can 
make the system more concrete, transparent and predictable, and perhaps that 
makes it easier to manage.  It's hard to predict whether this would improve our 
situation without actually doing it, though - it all comes down to the details.


On the other hand, we do hard-wire a lot of knowledge about Windows rather than 
autoconfing it.  This works because Windows is a fixed point; in contrast every 
Linux system is different in various ways.  So I guess I don't find it 
problematic to wire-in what happens on Windows, but I do try to avoid it where 
possible.



Getting back to the Windows native port, I'm pretty sure you're making more of a 
meal of this than necessary.  There's no need to port via HC files, unless I'm 
missing something.


 Whatever the end result is, GHC must be able to operate without Mingw
 and the GNU toolset.

That's the whole point of doing the port!

However, what I'm saying is that we can continue to use Cygwin as a set of tools 
for doing the build.  I don't see any problems with this (except that Cygwin is 
slow and clunky), and it keeps the changes to the current system to a minimum, 
and means we can continue to share the build system with Posixy systems.  Here's 
the plan I had in mind:


 1. modify GHC so that:
a) it can invoke CL instead of gcc to compile C files
b) its native code generator can be used to create native .obj files,
   I think you kept the syntax the same and used YASM, the other
   alternative is to generate Intel/MS syntax and use MASM.
c) it can link a binary using the MS linker
 2. modify Cabal so that it can use this GHC, and MS tools
 3. modify the build system where necessary to know about .obj .lib etc.
 4. modify the core packages to use Win32 calls only (no mingw)
 5. Use the stage 1 GHC to compile the RTS and libraries
 6. Build a stage 2 compiler: it will be a native binary
 7. Build a binary distribution

Regarding autoconf, for the time being, just supply ready-made output files 
(mk/config.h, libraries/base/include/HsBaseConfig.h, etc.).


Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread skaller
On Fri, 2007-06-22 at 12:03 +0100, Simon Marlow wrote:

 
 Ok, you clearly have looked at a lot more build systems than I have.  So you 
 think there's a shift from autoconf-style figure out the configuration by 
 running tests to having a database of configuration settings for various 
 platforms?  I'm surprised - I thought conventional wisdom was that you should 
 write your build system to be as independent as possible from the name of the 
 build platform, so that the system is less sensitive to changes in its 
 environment, and easier to port.  I can see how wiring-in the parameters can 
 make the system more concrete, transparent and predictable, and perhaps that 
 makes it easier to manage.  It's hard to predict whether this would improve 
 our 
 situation without actually doing it, though - it all comes down to the 
 details.

This misses the point. The 'suck it and see' idea fails totally for
cross-compilation. It's a special case.

The right way to do things is to separate the steps:

(a) make a configuration
(b) select a configuration

logically. This is particularly important for developers who are using
the same code base to build for multiple 'platforms' on the 
same machine.

With the above design you can have your cake and eat it too .. :)

That's the easy part.. the HARD part is: every 'system' comes with
optional 'add-on' facilities. These add-ons may need configuration
data. Often, you want to add the 'add-on' after the system is built.
So integrating the configuration data is an issue.

Felix build system allows add-on packages to have their own
configuration model. It happens to be executed on the fly,
and typically does your usual suck it and see testing
(eg .. where are the SDL headers? Hmm ..) 

This design is wrong of course ;(

-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread Simon Marlow

skaller wrote:

On Fri, 2007-06-22 at 12:03 +0100, Simon Marlow wrote:

Ok, you clearly have looked at a lot more build systems than I have.  So you 
think there's a shift from autoconf-style figure out the configuration by 
running tests to having a database of configuration settings for various 
platforms?  I'm surprised - I thought conventional wisdom was that you should 
write your build system to be as independent as possible from the name of the 
build platform, so that the system is less sensitive to changes in its 
environment, and easier to port.  I can see how wiring-in the parameters can 
make the system more concrete, transparent and predictable, and perhaps that 
makes it easier to manage.  It's hard to predict whether this would improve our 
situation without actually doing it, though - it all comes down to the details.


This misses the point. The 'suck it and see' idea fails totally for
cross-compilation. It's a special case.

The right way to do things is to separate the steps:

(a) make a configuration
(b) select a configuration

logically.


Hmm, I don't see how the approach fails totally for cross-compilation.  You 
simply have to create the configuration on the target machine, which is exactly 
what we do when we cross-compile GHC.  Admittedly the process is a bit ad-hoc, 
but it works.


Cheers,
Simon
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread Peter Tanski


On Jun 22, 2007, at 9:45 AM, Simon Marlow wrote:


skaller wrote:

On Fri, 2007-06-22 at 12:03 +0100, Simon Marlow wrote:
Ok, you clearly have looked at a lot more build systems than I  
have.  So you think there's a shift from autoconf-style figure  
out the configuration by running tests to having a database of  
configuration settings for various platforms?


I shouldn't overstate the situation: the other complete build  
systems, CMake and SCons do have autoconf capabilities in the way of  
finding headers and programs and checking test-compiles, the basic  
sanity checks--CMake has many more autoconf-like checks than SCons.   
Where they differ from the automake system seems to be their setup,  
which like Make has hard-coded settings for compilers, linkers, etc.   
(Some standard cmake settings are wrong for certain targets.)  I  
don't know if you have any interest in pursuing or evaluating CMake  
(certainly not now) but the standard setup is stored in a standard  
directory on each platform, say,
/usr/share/cmake-2.4/Modules/Platform/$(platform).cmake, and may be overridden
by your own cmake file in, say, $(srcdir)/cmake/UserOverride.cmake.


The preset-target-configuration build model I was referring to is a  
scaled-down version of the commercial practice which allows you to  
have a single system and simultaneously compile for many different  
architecture-platform combinations--once you have tested each and  
know how everything works.  For the initial exploration, it is a  
different (more anal) strategy: before invading, get all the  
intelligence you can and prepare thoroughly.  The GNU-Autoconf  
strategy is to keep a few troops who have already invaded many other  
places, adjust their all-purpose equipment a little for the mission  
and let them have at it.  My gripe is that their equipment isn't very  
good.


Cheers,
Pete


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread skaller
On Fri, 2007-06-22 at 14:45 +0100, Simon Marlow wrote:
 skaller wrote:

  This misses the point. The 'suck it and see' idea fails totally for
  cross-compilation. It's a special case.
  
  The right way to do things is to separate the steps:
  
  (a) make a configuration
  (b) select a configuration
  
  logically.
 
 Hmm, I don't see how the approach fails totally for cross-compilation.  You 
 simply have to create the configuration on the target machine, which is 
 exactly 
 what we do when we cross-compile GHC.  Admittedly the process is a bit 
 ad-hoc, 
 but it works.

But that consists of:

(a) make a configuration (on the target machine)
(b) select that configuration (on the host machine)

which is actually the model I suggest. To be more precise, the
idea is a 'database' of configurations, and building by selecting
one from that database as the parameter to the build process.

The database would perhaps consist of 

(a) definitions for common architectures
(b) personalised definitions

You would need a tool to copy and edit existing definitions
(eg .. a text editor) and a tool to 'autogenerate' prototype
definitions (autoconf for example).

What I meant failed utterly was simply building the sole
configuration by inspection of the properties of the
host machine (the one you will actually build on).

That does work if

(a) the auto-detect build scripts are smart and
(b) the host and target machines are the same

BTW: Felix has a 4 platform build model:

* build
* host
* target
* run

The build machine is the one you build on. Example: Debian
autobuilder.

The host is the one you intend to translate Felix code to
C++ code on, typically your workstation. In Windows environment
this might be Cygwin.

The target is the one you actually compile the C++ code on.
In Windows environment, this might be WIN32 native (MSVC++).

The run machine is where you actually execute the code.

The 'extra' step here is because it is a two stage compiler.
Some code has to be built twice: for example the GLR parser
elkhound executable runs on the host machine to generate
C++ and it uses a library. The same library is required
at run time, but has to be recompiled for the target.
EG: Elkhound built on cygwin to translate grammar to C++,
and Elkhound on MSVC++ for the run time automaton.

I'm not sure GHC as such need cross-cross compilation model,
but bootstrapping a cross compiler version almost certainly does.


-- 
John Skaller skaller at users dot sf dot net
Felix, successor to C++: http://felix.sf.net
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread Peter Tanski

On Jun 22, 2007, at 7:03 AM, Simon Marlow wrote:
In fact, to build a source distribution on Windows, there are only  
3 dependencies:  GHC, Mingw and (either MSYS or Cygwin).


To build from darcs, you also need: darcs, Happy, and Alex.  To  
build docs, you also need Haddock.  To run the testsuite you need  
Python.


True, Mingw does come standard with perl and a version of flex.
There are Windows-native versions of Perl and flex available (e.g.,
ActivePerl).  Now, you are familiar with Mingw.  Imagine being a
standard Windows programmer, trying to choose which version of Mingw
to download--some are minimal installations--and going over the build
requirements: perl, flex, happy, alex, and haddock are listed.  That
is quite a bit of preparation.  There are minimal-effort ways to go
about this (I will look into updating the wiki).


 Whatever the end result is, GHC must be able to operate without  
Mingw

 and the GNU toolset.

That's the whole point of doing the port!


For running GHC--how about being able to build a new version of GHC  
from source?



 1. modify GHC so that:
a) it can invoke CL instead of gcc to compile C files


Mostly done (not completely tested).

b) its native code generator can be used to create native .obj  
files,

   I think you kept the syntax the same and used YASM, the other
   alternative is to generate Intel/MS syntax and use MASM.


This is as easy as simply using Yasm--also mostly done (not  
completely tested).  By the way, by testing I mean doing more than  
a simple -optc... -optc... -optl... addition to the command line,  
although an initial build using a current mingw version of GHC may  
certainly do this.



c) it can link a binary using the MS linker
 2. modify Cabal so that it can use this GHC, and MS tools
 3. modify the build system where necessary to know about .obj .lib  
etc.


A bit invasive (it involves modifying the make rules so they take an  
object-suffix variable).  Instead of the current suffix.mk:


$(odir_)%.$(way_)o : %.hc

it should be:

$(odir_)%.$(way_)$(obj_sfx) : %.hc

or some such.  This may affect other builds, especially if for some  
reason autoconf can't determine the object-suffix for a platform,  
which is one reason I suggested a platform-specific settings file.  I  
could handle this by having autoconf set the target variable, put all  
the windows-specific settings in a settings.mk file (including a  
suffix.mk copy) and have make include that file.



 4. modify the core packages to use Win32 calls only (no mingw)


That is where a lot of preparation is going.  This is *much* harder  
to do from mingw than from VS tools since you have to set up all the  
paths manually.



 5. Use the stage 1 GHC to compile the RTS and libraries
 6. Build a stage 2 compiler: it will be a native binary
 7. Build a binary distribution


I told Torkil I would have a version of the replacement library  
available for him as soon as possible.  I'll shut up now.  It looks  
like a long weekend.


Cheers,
Pete

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread Simon Marlow

Peter Tanski wrote:

A bit invasive (it involves modifying the make rules so they take an 
object-suffix variable).  Instead of the current suffix.mk:


$(odir_)%.$(way_)o : %.hc

it should be:

$(odir_)%.$(way_)$(obj_sfx) : %.hc

or some such.  This may affect other builds, especially if for some 
reason autoconf can't determine the object-suffix for a platform, which 
is one reason I suggested a platform-specific settings file.  I could 
handle this by having autoconf set the target variable, put all the 
windows-specific settings in a settings.mk file (including a suffix.mk 
copy) and have make include that file.


Surely this isn't hard?

ifeq "$(TargetOS)" "windows"
osuf=obj
else
osuf=o
endif

and then use $(osuf) wherever necessary.


 4. modify the core packages to use Win32 calls only (no mingw)


That is where a lot of preparation is going.  This is *much* harder to 
do from mingw than from VS tools since you have to set up all the paths 
manually.


I don't understand the last sentence - what paths?  Perhaps I wasn't clear here: 
I'm talking about the foreign calls made by the base package and the other core 
packages; we can't call any functions provided by the mingw C runtime, we can 
only call Win32 functions.  Similarly for the RTS.  I have no idea how much 
needs to change here, but I hope not much.



 5. Use the stage 1 GHC to compile the RTS and libraries
 6. Build a stage 2 compiler: it will be a native binary
 7. Build a binary distribution


I told Torkil I would have a version of the replacement library 
available for him as soon as possible.  I'll shut up now.  It looks like 
a long weekend.


:-)

Cheers,
Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread Peter Tanski


On Jun 22, 2007, at 11:42 AM, Simon Marlow wrote:


Peter Tanski wrote:
A bit invasive (it involves modifying the make rules so they take  
an object-suffix variable).  Instead of the current suffix.mk:

$(odir_)%.$(way_)o : %.hc
it should be:
$(odir_)%.$(way_)$(obj_sfx) : %.hc
or some such.  This may affect other builds, especially if for  
some reason autoconf can't determine the object-suffix for a  
platform, which is one reason I suggested a platform-specific  
settings file.  I could handle this by having autoconf set the  
target variable, put all the windows-specific settings in a  
settings.mk file (including a suffix.mk copy) and have make  
include that file.


Surely this isn't hard?

ifeq "$(TargetOS)" "windows"
osuf=obj
else
osuf=o
endif

and then use $(osuf) wherever necessary.


Yes, it is easy, but now all Makefiles must be changed to use $(osuf),
such as this line in rts/Makefile:


378: %.$(way_)o : %.cmm $(H_FILES),

for what will be a (hopefully) temporary Windows build.


 4. modify the core packages to use Win32 calls only (no mingw)
That is where a lot of preparation is going.  This is *much*  
harder to do from mingw than from VS tools since you have to set  
up all the paths manually.


I don't understand the last sentence - what paths?  Perhaps I  
wasn't clear here: I'm talking about the foreign calls made by the  
base package and the other core packages; we can't call any  
functions provided by the mingw C runtime, we can only call Win32  
functions.  Similarly for the RTS.  I have no idea how much needs  
to change here, but I hope not much.


To use the MS tools with the standard C libraries and include  
directories, I must either gather the environment variables  
separately and pass them to cl/link on the command line or I must  
manually add them to my system environment (i.e., modify msys.bat, or  
the windows environment) so msys will use them in its environment.


The other problem is the old no-pathnames-with-spaces problem in Make, since
Make must be made to quote all those environment variables when
passing them to cl.  I could use the Make trick of filling the spaces
with a character and removing it just before quoting, but that is a
real hack and not very reliable--it breaks $(word ...).


Altogether it is a pain to get going and barely reproducible.  That  
is why I suggested simply producing .hc files and building from .hc  
using VS.


Cheers,
Pete 
___

Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: 64-bit windows version?

2007-06-22 Thread Matthew Danish
On Thu, Jun 21, 2007 at 12:35:15AM +1000, skaller wrote:
 Don't forget .. Mingw has to be installed too .. and in fact
 that is much harder. I tried to install MSYS and gave up.

You're kidding, right?  There are Windows installer .exes for MinGW and
MSYS.  You download one, run it, and click Next a few times.

-- 
-- Matthew Danish -- user: mrd domain: cmu.edu
-- OpenPGP public key: C24B6010 on keyring.debian.org
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] ANNOUNCE: Haskell Program Coverage 0.4

2007-06-22 Thread Andy Gill
I am happy to announce release 0.4 of Hpc, a tool for Haskell  
developers.


Hpc is a tool-kit to record and display Haskell Program Coverage. Hpc
includes tools that instrument Haskell programs to record program
coverage, run instrumented programs, and display the coverage
information obtained.

This version includes many changes since 0.2, including a small
DSL for expressing coverage holes.

More details can be found on the Hpc web-page.

  http://projects.unsafePerformIO.com/hpc

Feedback and feature requests are encouraged.

Andy Gill

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] Class type constraining?

2007-06-22 Thread Hugo Pacheco

Hi,
I have a set of functor definitions with corresponding instances

newtype Id a = Ident {unIdent :: a}   deriving Eq
newtype K b a = Const {unConst :: b}   deriving Eq


instance Functor Id where
  fmap f (Ident x) = Ident $ f x
instance Functor (K a) where
  fmap f (Const x) = Const x
(...)

And a class that creates a representation b for a functor f a:

class Functor f => C f a b | f a -> b where
  ftest :: f a -> b

instance C Id a a
instance C (K b) a b

I want to write some function

test :: (C f a b) => (a -> b)
test = ftest . undefined

But, as expected, the type checker complains that it cannot satisfy the
context, since the function would have to be valid for any representable
functor, as long as it has an instance.
What I would want is to constrain the set of functors that the class applies
to, to those for which I have all the required instances. Is that possible in
Haskell?

Sorry if this explanation is confusing.
Thanks in advance,
hugo
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Class type constraining?

2007-06-22 Thread Hugo Pacheco

That doesn't help in any sense: my problem is that I need to use the class
method, and although I take the context for granted, the compiler still needs
a general instance for any functor f.
hugo
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell] Class type constraining?

2007-06-22 Thread Benja Fallenstein

2007/6/22, Hugo Pacheco [EMAIL PROTECTED]:

class Functor f => C f a b | f a -> b where
   ftest :: f a -> b

I want to write some function

test :: (C f a b) => (a -> b)
test = ftest . undefined


I'm not sure whether this is what you want, but the obvious way to
make this type-check would seem to be to add a functional dependency
and a type signature for 'undefined,' like this:


class Functor f => C f a b | f a -> b, a b -> f where
   ftest :: f a -> b

test :: (C f a b) => (a -> b)
test = ftest . (undefined :: a -> f a)


- Benja
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Filesystem access

2007-06-22 Thread Bulat Ziganshin
Hello Andrew,

Friday, June 22, 2007, 12:19:51 AM, you wrote:
 1. Is there *any* way to determine how large a file is *without* opening
 it? The only library function I can find to do with file sizes is 
 hFileSize; obviously this only works for files that you have permission
 to open!

the std library doesn't contain such a function, although it is easily
modeled after getModificationTime. note that on windows this will
return only the lower 32 bits of the file size due to using lstat() internally

another way around this is to look at the getFileAttributes implementation
and build the same kind of wrapper around the win32 GetFileSize function
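
for example, a sketch of the stat-based approach using the unix package
(POSIX only; on windows one would wrap the corresponding Win32 call instead,
as described above):

import System.Posix.Files (fileSize, getFileStatus)
import System.Posix.Types (FileOffset)

-- stat() the file instead of opening it; no permission to read the
-- contents is required
fileSizeWithoutOpening :: FilePath -> IO FileOffset
fileSizeWithoutOpening path = fmap fileSize (getFileStatus path)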

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[4]: [Haskell-cafe] haskell crypto is reaaaaaaaaaally slow

2007-06-22 Thread Bulat Ziganshin
Hello Duncan,

Thursday, June 21, 2007, 8:48:53 AM, you wrote:

  The smallest possible would be 2 words overhead by just using a
  ByteArray#,
 
 i tried it once and found that ByteArray# size is returned rounded to 4 -
 there is no way in GHC runtime to alloc, say, exactly 37 bytes. and
 don't forget to add 2 unused bytes on average

 Right, GHC heap object are always aligned to the natural alignment of
 the architecture, be that 4 or 8 bytes.

 Try the same experiment with C's malloc. I'd be very surprised if you
 can allocate 37 bytes and not end up using 40 (plus some extra for
 remembering the allocation length).

what i'm trying to say is that one needs to store the exact string size
because the value returned by getSizeOfByteArray is aligned to 4

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Collections

2007-06-22 Thread Lennart Augustsson

It's not that broken,  It was designed by people from the FP community. :)

On 6/21/07, Andrew Coppin [EMAIL PROTECTED] wrote:


Brent Yorgey wrote:

 OK, I don't even understand that syntax. Have they changed the Java
 language spec or something?


 Yes.  As of version 5 (or 1.5, or whatever you want to call it), Java
 has parametric polymorphism.  Do a Google search for Java generics.

OMG - they actually added a language feature to Java... o_O

I bet it's broken though! ;-)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Type classes vs C++ overloading Re: [Haskell-cafe] Messing around with types [newbie]

2007-06-22 Thread Cristiano Paris

I sent this message yesterday to Bulat but it was intended for the haskell
cafe, so I'm resending it here today.

Thanks to everyone who answered me privately. Today I'll keep on
experimenting and read the references you gave me.

Cristiano

-- Forwarded message --
From: Cristiano Paris [EMAIL PROTECTED]
Date: Jun 21, 2007 6:20 PM
Subject: Re: Type classes vs C++ overloading Re: [Haskell-cafe] Messing
around with types [newbie]
To: Bulat Ziganshin [EMAIL PROTECTED]


On 6/21/07, Bulat Ziganshin [EMAIL PROTECTED] wrote:


Hello Cristiano,

Thursday, June 21, 2007, 4:46:27 PM, you wrote:

 class FooOp a b where
 foo :: a -> b -> IO ()

 instance FooOp Int Double where
 foo x y = putStrLn $ (show x) ++ " Double " ++ (show y)

this is a rather typical question :)



I knew it was... :D

unlike C++, which resolves any overloading at COMPILE TIME, selecting among
CURRENTLY available overloaded definitions and complaining only when this
overloading is ambiguous, type classes are the RUN-TIME overloading mechanism

your definition of partialFoo is compiled into code which may be used
with any instance of foo, not only the ones defined in this module. so, it
cannot rely on the first argument of foo always being an Int, because you may
define another instance of FooOp in another module. 10 is really a
constant function of type:

10 :: (Num t) => t

i.e. this function should receive a dictionary of class Num in order to
return a value of type t (this dictionary contains the fromInteger :: Integer -> t
method, which is used to convert the Integer representation of 10 into the type
actually required at this place)
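
as a small illustration (example made up here): the literal 10 stands for
fromInteger applied to an Integer, so the consumer's type picks which
fromInteger actually runs

-- 10 elaborates to 'fromInteger (10 :: Integer)'; the Num dictionary chosen
-- by the result type supplies fromInteger
tenAsInt :: Int
tenAsInt = fromInteger 10

tenAsDouble :: Double
tenAsDouble = fromInteger 10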

this means that partialFoo should have a way to deduce the type of 10
in order to pass it into the foo call. Let's consider its type:

partialFoo :: (FooOp t y) => y -> IO ()

when partialFoo is called with *any* argument, there is no way to
deduce the type of t from the type of y, which means that GHC has no way to
determine which type 10 in your example should have. for example, if
you define

instance FooOp Int32 Double where

anywhere, then the call partialFoo (5.0::Double) will become ambiguous

shortly speaking, overloading is resolved based on global class
properties, not on the few instances present in the current module. OTOH,
you build POLYMORPHIC functions this way, while C++ just selects the
best-suited variant of an overloaded function and hard-codes its call

further reading:
http://homepages.inf.ed.ac.uk/wadler/papers/class/class.ps.gz
http://haskell.org/haskellwiki/OOP_vs_type_classes
chapter 7 of GHC user's guide, functional dependencies



Mmm... your point is hard for me to understand.

In his message, I can understand Bryan Burgers' point better (thanks Bryan),
and I think it's somewhat right even if I don't fully understand the type
machinery occurring during ghc compilation (yet).

Quoting Bryan:

*From this you can see that 10 is not necessarily an Int, and 5.0 is
*not necessarily a Double. So the typechecker does not know, given just
10 and 5.0, which instance of 'foo' to use. But when you explicitly
told the typechecker that 10 is an Int and 5.0 is a Double, then the
type checker was able to choose which instance of 'foo' it should use.

So, let's see if I've understood how ghc works:

1 - It sees 5.0, which belongs to the Fractional class, and likewise 10,
which belongs to the Num class.
2 - It only has a (FooOp x y) instance of foo where x = Int and y =
Double, but it can't tell whether 5.0 and 10 would fit in the Int and
Double types (there's some sort of uncertainty here).
3 - Thus, ghci complains.

So far so good. Now consider the following snippet:

module Main where

foo :: Double -> Double
foo = (+2.0)

bar = foo 5.0

I specified intentionally the type signature of foo. Using the same argument
as above, ghci should get stuck in evaluating foo 5.0 as it may not be a
Double, but only a Fractional. Surprisingly (at least to me) it works!

So, it seems as if the type of 5.0 was induced by the type system to be
Double as foo accepts only Double's.

If I understand well, there's some sort of asymmetry between typechecking a
plain function application (the case of foo 5.0), where the type signature of
the function is dominant, and typechecking an overloaded function
application (the original case), where type inference can't pin things down
since someone could add a new overloading later, as Bulat says.

So, I tried to fix my code and I came up with this (partial) solution:

module Main where

class FooOp a b where
    foo :: a -> b -> IO ()

instance (Num t) => FooOp t Double where
    foo x y = putStrLn $ (show x) ++ " Double " ++ (show y)

partialFoo :: Double -> IO ()
partialFoo = foo 10

bar = partialFoo 5.0

As you can see, I specified that partialFoo accepts Double, so the type
of 5.0 is induced to be Double by that type signature and the ambiguity
disappears (along with relaxing the type of a to be simply a member of the
Num class so 10 can fit in anyway).

Problems arise if I add another instance of FooOp where b is Int (i.e. FooOp
Int Int):

module 

Re: Type classes vs C++ overloading Re: [Haskell-cafe] Messing around with types [newbie]

2007-06-22 Thread Tomasz Zielonka
On Fri, Jun 22, 2007 at 10:57:58AM +0200, Cristiano Paris wrote:
 Quoting Bryan:
 
 From this you can see that 10 is not necessarily an Int, and 5.0 is
 not necessarily a Double. So the typechecker does not know, given just
 10 and 5.0, which instance of 'foo' to use. But when you explicitly
 told the typechecker that 10 is an Int and 5.0 is a Double, then the
 type checker was able to choose which instance of 'foo' it should use.

I would stress "typechecker does not know, given just 10 and 5.0, which
instance of 'foo' to use". The statement "10 is not necessarily an Int"
may be misleading. I would rather say "10 can be not only Int, but also
any other type in the Num type class".

 So, let's see if I've understood how ghc works:
 
 1 - It sees 5.0, which belongs to the Fractional class, and likewise 10,
 which belongs to the Num class.
 2 - It only has a (FooOp x y) instance of foo where x = Int and y =
 Double, but it can't tell whether 5.0 and 10 would fit in the Int and
 Double types (there's some sort of uncertainty here).

The problem is not that it can't tell whether 5.0 and 10 would fit Int
and Double (actually, they do fit), it's that it can't tell if they
won't fit another instance of FooOp.

 3 - Thus, ghci complains.
 
 So far so good. Now consider the following snippet:
 
 module Main where
 
 foo :: Double -> Double
 foo = (+2.0)
 
 bar = foo 5.0
 
 I specified intentionally the type signature of foo. Using the same argument
 as above, ghci should get stuck in evaluating foo 5.0 as it may not be a
 Double, but only a Fractional. Surprisingly (at least to me) it works!

See above.

 So, it seems as if the type of 5.0 was induced by the type system to be
 Double as foo accepts only Double's.

I think that's correct.

 If I understand well, there's some sort of asymmetry between typechecking a
 plain function application (the case of foo 5.0), where the type signature of
 the function is dominant, and typechecking an overloaded function
 application (the original case), where type inference can't pin things down
 since someone could add a new overloading later, as Bulat says.

There is no asymmetry. The key word here is *ambiguity*. In the
(Double - Double) example there is no ambiguity - foo is not
overloaded, in other words it's a single function, so it suffices
to check if the parameters have the right types.

In your earlier example, both 5.0 and foo are overloaded. If you had
more instances for FooOp, the ambiguity could be resolved in many ways,
possibly giving different behaviour. Haskell doesn't try to be smart
and waits for you to decide. And it pretends it doesn't see that there
is only one instance, because taking advantage of this situation could
give surprising results later.
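
For instance, a small sketch (assuming the FooOp class and the
FooOp Int Double instance from earlier in the thread, compiled with
-fglasgow-exts):

  instance FooOp Integer Double where
      foo x y = putStrLn $ show x ++ " Integer/Double " ++ show y

  -- now   foo 10 (5.0 :: Double)   is ambiguous: the literal 10 could be
  -- Int or Integer, and the two instances behave differently.
  -- An annotation resolves it:
  ok = foo (10 :: Int) (5.0 :: Double)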

 but it didn't work. Here's ghci's complaint:
 
 example.hs:7:0:
Duplicate instance declarations:
  instance (Num t1, Fractional t2) => FooOp t1 t2
    -- Defined at example.hs:7:0
  instance (Num t1, Num t2) => FooOp t1 t2
    -- Defined at example.hs:10:0
 Failed, modules loaded: none.

Instances are duplicate if they have the same (or overlapping) instance
heads. An instance head is the thing after =>. What's before => doesn't
count.
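
A sketch of the distinction, reusing the thread's FooOp class (again with
-fglasgow-exts):

  -- rejected as duplicates: both heads are  FooOp t1 t2,
  -- no matter what the contexts say
  --   instance (Num t1, Fractional t2) => FooOp t1 t2 where ...
  --   instance (Num t1, Num t2)        => FooOp t1 t2 where ...

  -- accepted: the heads themselves differ
  instance (Num t) => FooOp t Double where
      foo x y = putStrLn $ show x ++ " Double " ++ show y
  instance (Num t) => FooOp t Int where
      foo x y = putStrLn $ show x ++ " Int " ++ show y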

 It seems that Num and Fractional are somewhat related. Any hint?

It's not important here, but indeed they are:
class (Num a) => Fractional a where

Best regards
Tomek
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graphical Haskell

2007-06-22 Thread Pasqualino 'Titto' Assini
This might be of interest:

http://pipes.yahoo.com/pipes/

Best,

titto

On Friday 22 June 2007 11:15:49 peterv wrote:
 Hi,

 Since nobody gave an answer on this topic, I guess it is insane to do it in
 Haskell (at least for a newbie)? :)

 Thanks for any info,
 Peter

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of peterv
 Sent: Wednesday, June 20, 2007 21:48
 To: haskell-cafe@haskell.org
 Subject: [Haskell-cafe] Graphical Haskell

 In the book Haskell School of Expression, streams are nicely explained
 using a graphical flow graph.

 This is also done more or less in
 http://research.microsoft.com/~simonpj/papers/marktoberdorf/Marktoberdorf.ppt
 to explain monads and other concepts.

 I would like to create a program that allows you to create such flow
 graphs, and then let GHC generate the code and do type inference.

 I found a paper where Haskell is used to create a GUI application with
 undo/redo etc for creating graphical Bayesian networks
 (http://www.cs.uu.nl/dazzle/f08-schrage.pdf), so this gave me confidence
 that I could it do all in Haskell.

 Now, instead of generating Haskell code (which I could do first, would be
 easier to debug), I would like to directly create an AST, and use an
 Haskell API to communicate with GHC.

 I already found out that GHC indeed has such an API, but how possible is
 this idea? Has this been done before? I only found a very old attempt at
 this, confusingly also called Visual Haskell, see
 http://ptolemy.eecs.berkeley.edu/%7Ejohnr/papers/visual.html, but I can't
 find any source code for that project.

 I did a similar project in C# that generated C++ code, so I've done it
 before, just not in Haskell.

 Thanks a lot,
 Peter


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Graphical Haskell

2007-06-22 Thread peterv
Hi,

Since nobody gave an answer on this topic, I guess it is insane to do it in
Haskell (at least for a newbie)? :)

Thanks for any info,
Peter

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of peterv
Sent: Wednesday, June 20, 2007 21:48
To: haskell-cafe@haskell.org
Subject: [Haskell-cafe] Graphical Haskell

In the book Haskell School of Expression, streams are nicely explained
using a graphical flow graph.

This is also done more or less in
http://research.microsoft.com/~simonpj/papers/marktoberdorf/Marktoberdorf.ppt
to explain monads and other concepts.

I would like to create a program that allows you to create such flow graphs,
and then let GHC generate the code and do type inference. 

I found a paper where Haskell is used to create a GUI application with
undo/redo etc for creating graphical Bayesian networks
(http://www.cs.uu.nl/dazzle/f08-schrage.pdf), so this gave me confidence
that I could it do all in Haskell.

Now, instead of generating Haskell code (which I could do first, would be
easier to debug), I would like to directly create an AST, and use an Haskell
API to communicate with GHC. 

I already found out that GHC indeed has such an API, but how possible is
this idea? Has this been done before? I only found a very old attempt at
this, confusingly also called Visual Haskell, see
http://ptolemy.eecs.berkeley.edu/%7Ejohnr/papers/visual.html, but I can't
find any source code for that project.

I did a similar project in C# that generated C++ code, so I've done it
before, just not in Haskell.

Thanks a lot,
Peter


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Orthogonal Persistence in Haskell

2007-06-22 Thread Pasqualino 'Titto' Assini
Many thanks Claus for the extended explanation, it makes perfect sense.

For more info I will now turn to the papers :-)

Talking about serialisation, an interesting paper has just appeared on 
lambda-the-ultimate:

HOT Pickles
http://lambda-the-ultimate.org/node/2305

Regards,

 titto



On Friday 22 June 2007 00:28:53 Claus Reinke wrote:
  with orthogonal persistence, everything a program touches might
  persist, but usually, programs talk about the data being persistet (?),
  not about whether that data is currently temporary or in long-term
  storage. if you want to move such data between processes or storage
  areas, you move the reference, and the system handles serialisation/
  communication/deserialisation behind the scenes.
 
  This is interesting, could you elaborate on it?
  How would you get data to move around by moving its reference?

 more elaboration than the various papers, surveys, and phd theses
 listed in the references i provided?-) the idea is that i give you the
 reference, and you take care of looking at the data behind it,
 without me having to serialise the contents;-)

 but ok, lets see whether i can get the idea accross by example:

 a) suppose you want to move some x from list a to list b

 do you get the type of x, devise a type-specific traversal to
 serialise x from the source, move the flattened data from a to b,
 and deserialise x in the target?

 or do you just write:

 test = move ([1..4],[3..5])
 move (x:as, bs) = (as, x:bs)

 b) suppose you want to move some x from concurrent haskell
 process a to concurrent haskell process b

 do you get the type of x, devise a type-specific traversal to
 serialise x from the source, move the flattened data from a to b,
 and deserialise x in the target?

 or do you write something like:

 test = do { av <- newEmptyMVar;
             bv <- newEmptyMVar;
             forkIO (putMVar av [1..]);
             forkIO (takeMVar bv >>= print . take 10);
             move av bv }
 move av bv = takeMVar av >>= putMVar bv

 c) suppose you want to move some x from os process a to
 os process b

 do you get the type of x, devise a type-specific traversal to
 serialise x from the source, move the flattened data from a to b,
 and deserialise x in the target?

 yes. and if the type is not serialisable, you're stuck.

 d) suppose you want to move some x from os process a to
 an os file, for later retrieval in process b

 do you get the type of x, devise a type-specific traversal to
 serialise x from the source, move the flattened data from a to b,
 and deserialise x in the target?

 yes. and if the type is not serialisable, you're stuck.

 now, why are c/d so much more troublesome than a/b? i don't
 care whether the x to be moved is an integer, a matrix, a function,
 or the list of primes - i just want it to be moved from a to b. or
 rather, i move the reference to x, and the runtime system moves
 whatever representation is behind that, if a move is necessary,
 and without ever exposing that internal representation. and if i
 happen to move x into a long-term storage area, it will persist
 there for future reference, without further ado.

 much more about that idea in the papers i mentioned. or, if you
 prefer something more recent, have a look at the Clean papers:

 http://www.st.cs.ru.nl/Onderzoek/Publicaties/publicaties.html

 a selection of entries related to dynamics and first-class i/o:

 1997:  4. Pil, Marco. First Class File I/O.
 2003:  7. Arjen van Weelden and Rinus Plasmeijer.
        Towards a Strongly Typed Functional Operating System.
 2003:  6. Martijn Vervoort and Rinus Plasmeijer.
        Lazy Dynamic Input/Output in the lazy functional language Clean.
 2004: 21. Arjen van Weelden, Rinus Plasmeijer.
        A Functional Shell that Dynamically Combines Compiled Code.

 hth,
 claus

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Collections

2007-06-22 Thread apfelmus
Thomas Conway wrote:
 On 6/22/07, Duncan Coutts [EMAIL PROTECTED] wrote:
 You might find that lazy IO is helpful in this case. The primitive that
 implements lazy IO is unsafeInterleaveIO :: IO a -> IO a
 
 Personally, unsafeInterleaveIO is so horribly evil, that even just
 having typed the name, I'll have to put the keyboard through the
 dishwasher (see http://www.coudal.com/keywasher.php).

:D :D
Finally someone who fully understands the true meaning of the prefix
unsafe  ;)

 Note that using a Map will probably not help since it needs to
 read all the keys to be able to construct it so that'd pull
 in all the data from disk.

 Well, in the case I'm dealing with, the map can contain the current
 key from each postings vector, and the closure for reading the
 remainder of the vector. E.g. Map Key ([IO (Maybe Key)]).

In any case, you have to store as many keys as you have lists to sort,
but lazy mergesort will not hold on more than (length xs + 1) keys in
memory at a single moment in time and only force one new key per
retrieval. No lingering intermediate lists :)

In this situation, unsafeInterleaveIO is an easy way to carry this
behavior over to the IO-case:

 type Reader t = IO (Maybe t)
 type Writer t = t -> IO ()

 readList :: Reader t -> IO [t]
 readList m = unsafeInterleaveIO $ do
     mx <- m
     case mx of
         Just x  -> liftM (x:) $ readList m
         Nothing -> return []

 mergesortIO :: Ord t => [Reader t] -> Writer t -> IO ()
 mergesortIO xs f = do
     ys <- mapM readList xs
     mapM_ f $ mergesort ys

Here, readList creates only as many list elements as you demand,
similarly to getContents. Of course, it has the same problem as
getContents, namely that you can accidentally close the file before
having read all data. But this applies to any on-demand approach, be
it with IO or without.
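
A small usage sketch of the above (hypothetical: a file with one key per
line, System.IO assumed imported; in a real module, hide the Prelude's
readList to avoid the name clash):

  readKeys :: Handle -> Reader Int
  readKeys h = do
      eof <- hIsEOF h
      if eof then return Nothing
             else fmap (Just . read) (hGetLine h)

  firstTen :: FilePath -> IO [Int]
  firstTen path = do
      h  <- openFile path ReadMode
      xs <- readList (readKeys h)   -- no reading has happened yet
      return (take 10 xs)           -- only the elements the caller actually
                                    -- forces are read, at most ten lines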

Also, you can make the heap in mergesort explicit and obtain something
similar to your current approach with Data.Map. The observation is that
while mergesort does create a heap, its shape does not change and is
determined solely by (length xs).

-- convenient invariant:
--   the smaller element comes from the left child
  data Ord b => Heap m b = Leaf m b | Branch b (Heap m b) (Heap m b)

-- smart constructor
  branch :: Ord b => Heap m b -> Heap m b -> Heap m b
  branch x y
      | gx <= gy  = Branch gx x y
      | otherwise = Branch gy y x
      where
      (gx,gy) = (getMin x, getMin y)

-- fromList is the only way to insert elements into a heap
  fromList :: Ord b => [(m,b)] -> Heap m b
  fromList = foldtree1 branch . map (uncurry Leaf)

  getMin :: Heap m b -> b
  getMin (Leaf _ b)      = b
  getMin (Branch b _ _ ) = b

  deleteMin :: Heap (Reader b) b -> IO (Maybe (Heap (Reader b) b))
  deleteMin (Leaf m _) = m >>= return . fmap (Leaf m)
  deleteMin (Branch _ x y) = do
      mx' <- deleteMin x
      return . Just $ case mx' of
          Just x' -> branch x' y
          Nothing -> y

  mergesortIO :: Ord t => [Reader t] -> Writer t -> IO ()
  mergesortIO xs f = ...

 Also, I need to support concurrent querying and updates,
 and trying to manage the locking is quite hard enough as it is,
 without trying to keep track of which postings vectors have closures
 pointing to them!

I guess you have considered Software Transactional Memory for atomic
operations?
   http://research.microsoft.com/~simonpj/papers/stm/index.htm

Also, write-once-read-many data structures (like lazy evaluation uses
them all the time) are probably very easy to get locked correctly.


Regards,
apfelmus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Type classes vs C++ overloading Re: [Haskell-cafe] Messing around with types [newbie]

2007-06-22 Thread Cristiano Paris

On 6/22/07, Tomasz Zielonka [EMAIL PROTECTED] wrote:
...


The problem is not that it can't tell whether 5.0 and 10 would fit Int
and Double (actually, they do fit), it's that it can't tell if they
won't fit another instance of FooOp.



You expressed the concept in more correct terms but I intended the same...
I'm starting to understand now.



Instances are duplicate if they have the same (or overlapping) instance
heads. An instance head is the thing after =>. What's before => doesn't
count.



So, the context is irrelevant to distinguishing instances?


It seems that Num and Fractional are somewhat related. Any hint?

It's not important here, but indeed they are:
    class (Num a) => Fractional a where



I see. Thank you Tomasz.

Cristiano
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Philip Armstrong

On Thu, Jun 21, 2007 at 08:42:57PM +0100, Philip Armstrong wrote:

On Thu, Jun 21, 2007 at 03:29:17PM -0400, Mark T.B. Carroll wrote:

Philip Armstrong [EMAIL PROTECTED] writes:
(snip)

Why on earth would you use -fexcess-precision if you're using Floats?
The excess precision only apples to Doubles held in registers on x86
IIRC. (If you spill a Double from a register to memory, then you lose
the extra precision bits in the process).


Some googling suggests that point 2 on
http://www.haskell.org/hawiki/FasterFloatingPointWithGhc
might have been what I was thinking of.


That's the old wiki. The new one gives the opposite advice! (As does
the ghc manual):

 http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html
 http://www.haskell.org/haskellwiki/Performance/Floating_Point


Incidentally, the latter page implies that ghc is being overly
pessimistic when compiling FP code without -fexcess-precision:

On x86 (and other platforms with GHC prior to version 6.4.2), use
 the -fexcess-precision flag to improve performance of floating-point
 intensive code (up to 2x speedups have been seen). This will keep
 more intermediates in registers instead of memory, at the expense of
 occasional differences in results due to unpredictable rounding.

IIRC, it is possible to issue an instruction to the x86 FP unit which
makes all operations work on 64-bit Doubles, even though there are
80-bits available internally. Which then means there's no requirement
to spill intermediate results to memory in order to get the rounding
correct.

Ideally, -fexcess-precision should just affect whether the FP unit
uses 80 or 64 bit Doubles. It shouldn't make any performance
difference, although obviously the generated results may be different.

As an aside, if you use the -optc-mfpmath=sse option, then you only
get 64-bit Doubles anyway (on x86).

cheers, Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Graphical Haskell

2007-06-22 Thread Henning Thielemann

On Fri, 22 Jun 2007, peterv wrote:

 Since nobody gave an answer on this topic, I guess it is insane to do it in
 Haskell (at least for a newbie)? :)

It's certainly an interesting project. Since signal processing is much
like functional programming, a graphical Haskell editor could also serve
as a nice signal processing graph editor.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graphical Haskell

2007-06-22 Thread Pasqualino 'Titto' Assini

On Friday 22 June 2007 11:21:31 Henning Thielemann wrote:
 On Fri, 22 Jun 2007, peterv wrote:
  Since nobody gave an answer on this topic, I guess it is insane to do it
  in Haskell (at least for a newbie)? :)

 It's certainly an interesting project. Since signal processing is much
 like functional programming, a graphical Haskell editor could also serve
 as a nice signal processing graph editor.

An existing example of which is CAL's Gem Cutter:

http://resources.businessobjects.com/labs/cal/gemcutter-techpaper.pdf 

titto
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] To yi or not to yi, is this really the question? A plea for a cooperative, ubiquitous, distributed integrated development system.

2007-06-22 Thread Claus Reinke

Most languages, even Java, have a reflection capability to dynamically
inspect an object.

_Even_ Java? That's a strange point of view considering how much money
went into this technology.


they didn't take reflection seriously at first, initially providing only a 
half-baked feature set; that state, and the transition period when they 
finally noticed that they actually needed better reflection support for 
their own tools was somewhat painful (was it around/before jdk 1.2? 
partially coevolving with jni, debuggers, beans, etc.?).



I also find it hard to believe that most languages have reflection,
especially those which are traditionally focused on efficiency and
compilation to native code, like C, C++, Fortran, Pascal, etc.


c did have something like '&thing', providing you with the address of
'thing's representation, and in more innocent times, with the ability to
read and rewrite that representation:-) c++ had templates, overloading

if you doubt the expressive power of even such restricted reflection
support, think of buffer overflow exploits or, in the scripting world,
of string injection attacks. these are negative examples, but they
demonstrate the potential of reflection support: to enable the 
unexpected, to support evolution of uses not originally planned for.



How many languages with reflection can you list?


you're kidding, right? 


lisp, prolog, smalltalk, clos' mop, java, javascript, sh, perl, ..
well, most shell/scripting languages, and to a (sometimes very)
limited extent, most languages

however, that's a bit vague, and i always mix up the directions, so 
let me try to pin down some terms, so that we're at least mixed up

the same way:-)

   - reification: from program/data to representation (reify, quote)
   - reflection: from representation to program/data (eval, splice)
   - meta-programming: programs operating on program representations
   - reflective programming: programs operating on their own representations

unless i'm talking about specific operations/instances of the scheme,
i tend to refer to the last item, encompassing all others, when i talk
about reflection in programming languages.
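
in haskell terms, template haskell is the most direct instance of that
quote/splice pairing; a minimal sketch (just an illustration, not taken
from any of the projects mentioned here):

  {-# LANGUAGE TemplateHaskell #-}
  import Language.Haskell.TH

  main :: IO ()
  main = do
      e <- runQ [| 1 + 2 :: Int |]     -- reification: quote the expression
      print e                          -- inspect its representation (an Exp)
      print $( [| 1 + 2 :: Int |] )    -- reflection: splice it back in; prints 3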

now, if you consider the old game of quines (programs printing their 
own representation), most turing-complete languages provide some
reflection, the only question is, is it well supported or so awkward 
that its only uses are limited to theory papers and obfuscated code 
competitions?



I think the reasons are mostly insufficient resources and not enough
interest to justify the effort. 


the former, yes. the latter, no. the lisp/smalltalk folks have known
all along, the java folks have found out the hard way, and some of
us haskellers are still trying to pretend we do not know, even though
we've been bitten often enough:

-   good reflection support makes it easy to develop tools
   and to experiment with language extensions

-   lack of good reflection support causes mutually inconsistent
   complex workarounds trying to reinvent reflection support
   the hard way while seriously hampering tool development

anyone who tried to develop tools for haskell before the haskell
implementations started to provide haskell apis to their inner
workings can attest to the difficulties. whereas, the better these
apis get, the less current developers are even aware that there
used to be problems of that kind.

and at the language level, we still use preprocessors, including
the c one :-( (btw, languages with adequate reflection support tend 
not to have separate preprocessors), we do have partial efforts 
like template haskell, data/typeable (and the generic techniques 
based on them, including scrap your boilerplate), in fact the 
whole type-class-level programming area could be said to be
about type-based meta-programming generating functional 
programs from type-level proofs, then there are pragmas and 
implicit insertion of program markers for profiling/coverage 
analysis/debugging, complete program transformations for 
profiling/tracing/debugging, multiple separate frontends and 
ast types, language.haskell, hsx, programatica, poor man's 
versions of type dynamic, dynamic loading and runtime code 
generation, data.dynamic, hs-plugins, ghc api, yhc api, hugs 
server api, .. ah, well, you get the idea?-)


perhaps the most important aspect of reflection support is to
notice that there is a common theme in all these separate
efforts, and a common support base that could help to make
all of those efforts and similar tools/extensions/applications
easier to develop, with the base designed to be consistent
   and maintained continuously, in a single place, instead of 
developed and forgotten again and again.


claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Simon Marlow

Philip Armstrong wrote:

On Thu, Jun 21, 2007 at 08:42:57PM +0100, Philip Armstrong wrote:

On Thu, Jun 21, 2007 at 03:29:17PM -0400, Mark T.B. Carroll wrote:



That's the old wiki. The new one gives the opposite advice! (As does
the ghc manual):

 http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html
 http://www.haskell.org/haskellwiki/Performance/Floating_Point


Incidentally, the latter page implies that ghc is being overly
pessimistic when compiling FP code without -fexcess-precision:

On x86 (and other platforms with GHC prior to version 6.4.2), use
 the -fexcess-precision flag to improve performance of floating-point
 intensive code (up to 2x speedups have been seen). This will keep
 more intermediates in registers instead of memory, at the expense of
 occasional differences in results due to unpredictable rounding.

IIRC, it is possible to issue an instruction to the x86 FP unit which
makes all operations work on 64-bit Doubles, even though there are
80-bits available internally. Which then means there's no requirement
to spill intermediate results to memory in order to get the rounding
correct.


For some background on why GHC doesn't do this, see the comment MORE FLOATING 
POINT MUSINGS... in


  http://darcs.haskell.org/ghc/compiler/nativeGen/MachInstrs.hs

The main problem is floats: even if you put the FPU into 64-bit mode, your float 
operations will be done at 64-bit precision.  There are other technical problems 
that we found with doing this, the comment above elaborates.


GHC passes -ffloat-store to GCC, unless you give the flag -fexcess-precision. 
The idea is to try to get reproducible floating-point results.  The native code 
generator is unaffected by -fexcess-precision, but it produces rubbish 
floating-point code on x86 anyway.



Ideally, -fexcess-precision should just affect whether the FP unit
uses 80 or 64 bit Doubles. It shouldn't make any performance
difference, although obviously the generated results may be different.



As an aside, if you use the -optc-mfpmath=sse option, then you only
get 64-bit Doubles anyway (on x86).


You probably want SSE2.  If I ever get around to finishing it, the GHC native 
code generator will be able to generate SSE2 code on x86 someday, like it 
currently does for x86-64.  For now, to get good FP performance on x86, you 
probably want


  -fvia-C -fexcess-precision -optc-mfpmath=sse2

Cheers,
Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Simon Marlow

Philip Armstrong wrote:

On Thu, Jun 21, 2007 at 08:15:36PM +0200, peterv wrote:
So float math is *slower* than double math in Haskell? That is 
interesting.
Why is that?   

BTW, does Haskell support 80-bit long doubles? The Intel CPU seems 
to use

that format internally.


As I understand things, that is the effect of using -fexcess-precision.

Obviously this means that the behaviour of your program can change
with seemingly trivial code rearrangements,


Not just code rearrangements: your program will give different results depending 
on the optimisation settings, whether you compile with -fvia-C or -fasm, and the 
results will be different from those on a machine using fixed 32-bit or 64-bit 
precision floating point operations.


Cheers,
Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Collections

2007-06-22 Thread Thomas Conway

On 6/22/07, apfelmus [EMAIL PROTECTED] wrote:

I guess you have considered Software Transactional Memory for atomic
operations?
   http://research.microsoft.com/~simonpj/papers/stm/index.htm

Also, write-once-read-many data structures (like lazy evaluation uses
them all the time) are probably very easy to get locked correctly.


STM was *the* justification to the mgt for letting me use Haskell
rather than C++. :-)

However, you do need to take care, because in this context it would be
easy to end up creating great big transactions which conflict with one
another, which quite aside from wasting CPU on retries, can in extreme
cases lead to starvation. A bit like laziness, STM is fantastic for
correctness, but can be a bit obtuse for performance. With that
proviso, I think STM is better than sliced bread.[*]
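
A minimal sketch of the "keep transactions small" point (illustrative only;
the TVars and amounts are made up):

  import Control.Concurrent.STM

  -- one small, self-contained transaction per logical operation;
  -- small transactions are far less likely to conflict and retry
  transfer :: TVar Int -> TVar Int -> Int -> STM ()
  transfer from to n = do
      x <- readTVar from
      writeTVar from (x - n)
      y <- readTVar to
      writeTVar to (y + n)

  demo :: IO ()
  demo = do
      a <- atomically (newTVar 10)
      b <- atomically (newTVar 0)
      atomically (transfer a b 5)
      atomically (readTVar b) >>= print   -- prints 5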

Incidentally, I read Herlihy's papers on lock free data structures
early on in my work on parallelism and concurrency for Mercury in the
mid 90's. What a shame I didn't have the wit to understand them
properly at the time, or Mercury might have had STM 10 years ago. :-)

T.
[*] People who know me well, would realize that since I bake my own
bread and slice it with a bread-knife myself, comparison to sliced
bread may be faint praise. It isn't.
--
Dr Thomas Conway
[EMAIL PROTECTED]

Silence is the perfectest herald of joy:
I were but little happy, if I could say how much.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graphical Haskell

2007-06-22 Thread Claus Reinke

Since nobody gave an answer on this topic, I guess it is insane to do it in
Haskell (at least for a newbie)? :)


not necessarily; we're all waiting for your first release?-)


I would like to create a program that allows you to create such flow graphs,
and then let GHC generate the code and do type inference. 


spun off from dazzle, which you've found, there's also blobs:

   http://www.cs.york.ac.uk/fp/darcs/Blobs/


Now, instead of generating Haskell code (which I could do first, would be
easier to debug), I would like to directly create an AST, and use an Haskell
API to communicate with GHC. 


one thing to consider: things get a little more tricky when the generated
haskell and dynamically loaded code is meant to do graphics (such as 
updating the original diagram with the state of the simulation). in particular, 
check that the gui framework actually works via that more circuitous route 
(similar problems to running in ghci instead of ghc).



I already found out that GHC indeed has such an API, but how possible is
this idea? Has this been done before? 


the ghc api is meant to support this kind of endeavours, and it isn't frozen
yet, either: the ghc team is happy to receive feedback about things that work 
or things that could work better.


before the ghc api, before blobs (after dazzle, though;), i did an embedding
of haskell-coloured petri nets in haskell, with a very simplistic graphical
net editor on top of wxhaskell, which generated haskell code for the net,
then called ghci to type-check and run the resulting code with a copy of
the original net graphics to update during simulation (poor man's reflection:):

   http://www.cs.kent.ac.uk/people/staff/cr3/HCPN/

it worked, but some things were annoying: 


- no high-level support for writing graph editors in wxhaskell;
   blobs aims to fix that

- awkward meta-programming and runtime reflection;
   ghc api should help a lot (but i can't see anything wrong with
   letting it work on generated source code first; optimization can
    come later)

- wxhaskell encourages low-level dependencies, at least when
   you're writing your first wxhaskell programs, because it can
   be rather difficult just to find the function you need, you're
   tempted to use it right there, just to see if it works, and leave
   cleaning up for later, which never comes; 

   gui frameworks are worse than the io monad; try to abstract 
   and limit your uses of gui lib features to as few modules as 
   possible; nicer code, easier to switch to different framework


- abi incompatibility!!^*L$W%*^%*! 


   sorry,-) but that has become the deal breaker for me; it is
   bad enough that there are two major haskell gui libs out
   there, as it means that your clients may have the wrong one
   or none at all, and need to install the one you need; but,
   worse than that, whenever there's a new ghc release, 
   everybody needs to rebuild their gui libs, and if you

   have the latest ghc release and a recent ghc head installed,
   you even need separate copies of the gui lib, etc, etc. 


   so you use a nice high-level language to get a lot done
   in very little very portable code, but instead of distributing
   a few pages of haskell, with build as simple as ghc --make,
   you have to worry about rather huge gui lib installations,
   and you have to worry anew for each ghc release..

btw, i am thinking about reviving my hcpn project, to make 
use of the ghc api, but i'd like to get rid of the binary gui lib 
dependency first. my current take on this is "gui lib? no, 
thanks, i'm just browsing", if you know what i mean?-)


hth,
claus


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Collections

2007-06-22 Thread ajb
G'day.

Quoting Andrew Coppin [EMAIL PROTECTED]:

 True enough - but that's a rather specific task.

There are a lot of rather specific tasks out there.

Cheers,
Andrew Bromage
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Philip Armstrong

On Fri, Jun 22, 2007 at 01:16:54PM +0100, Simon Marlow wrote:

Philip Armstrong wrote:

IIRC, it is possible to issue an instruction to the x86 FP unit which
makes all operations work on 64-bit Doubles, even though there are
80-bits available internally. Which then means there's no requirement
to spill intermediate results to memory in order to get the rounding
correct.


For some background on why GHC doesn't do this, see the comment MORE 
FLOATING POINT MUSINGS... in


  http://darcs.haskell.org/ghc/compiler/nativeGen/MachInstrs.hs


Twisty. I guess 'slow, but correct, with switches to go faster at the
price of correctness' is about the best option.

You probably want SSE2.  If I ever get around to finishing it, the GHC 
native code generator will be able to generate SSE2 code on x86 someday, 
like it currently does for x86-64.  For now, to get good FP performance on 
x86, you probably want


  -fvia-C -fexcess-precision -optc-mfpmath=sse2


Reading the gcc manpage, I think you mean -optc-msse2
-optc-mfpmath=sse. -mfpmath=sse2 doesn't appear to be an option.

(I note in passing that the ghc darcs head produces binaries from
ray.hs which are about 15% slower than ghc 6.6.1 ones btw. Same
optimisation options used both times.)

cheers, Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Dougal Stanton

On 22/06/07, Claus Reinke [EMAIL PROTECTED] wrote:


perhaps this should be generalised to ghc flag profiles, to cover
things like '-fno-monomorphism-restriction -fno-mono-pat-binds'
or '-fglasgow-exts -fallow-undecidable-instances' and the like?


You just *know* someone's gonna abuse that to make a genuine
-funroll-loops, right? ;-)

D.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Claus Reinke

  -fvia-C -fexcess-precision -optc-mfpmath=sse2


is there, or should there be a way to define -O profiles for ghc?
so that -O would refer to the standard profile, -Ofp would refer
to the combination above as a floating point optiimisation profile,
other profiles might include things like -funbox-strict-fields, and
-Omy42 would refer to my own favourite combination of flags..

perhaps this should be generalised to ghc flag profiles, to cover
things like '-fno-monomorphism-restriction -fno-mono-pat-binds'
or '-fglasgow-exts -fallow-undecidable-instances' and the like?

just a thought,
claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell version of ray tracer code is muchslower than the original ML

2007-06-22 Thread Claus Reinke

on second thought, user-defined profiles are a two-edged sword,
negating the documentation advantages of in-source flags. better to
handle that in the editor/ide. but predefined flag profiles would still
seem to make sense?

there is something wrong about this wealth of options. it is great 
that one has all that control over details, but it also makes it more 
difficult to get things right (eg, i was surprised that -O doesn't 
unbox strict fields by default). even a formula one driver doesn't 
control every lever himself, that's up to the team.


for optimisations, i used to have a simple picture in mind (from
my c days, i guess?), when ghci is no longer fast enough, that is:

no -O: standard executables are fast enough, thank you

-O: standard executables aren't fast enough, do something
   about it, but don't bother me with the details

-O2: i need your best _safe_ optimisation efforts, and i'm 
   prepared to pay for that with longer compilation times


-O3: i need your absolute best optimisation efforts, and i'm 
   prepared to verify myself that optimisations that cannot 
   automatically be checked for safety have no serious negative 
   effect on the results (it would be nice if you told me which

   potentially unsafe optimisations you used in compilation)

on top of that, as an alternative to -O3, specific tradeoffs would
be useful, where i specify whether i want to optimize for space
or for time, or which kinds of optimization opportunities the
compiler should pay attention to, such as strictness, unboxing,
floating point ops, etc.. but even here i wouldn't want to give
platform-specific options, i'd want the compiler to choose the
most appropriate options, given my specified tradeoffs and
emphasis, taking into account platform and self-knowledge.

so, i'd say -Ofp, and the compiler might pick:


  -fvia-C -fexcess-precision -optc-mfpmath=sse2


if i'm on a platform and compiler version where that is an 
appropriate selection of flags to get the best floating point

performance. and it might pick a different selection of flags
on a different platform, or with a different compiler version.


perhaps this should be generalised to ghc flag profiles, to cover
things like '-fno-monomorphism-restriction -fno-mono-pat-binds'
or '-fglasgow-exts -fallow-undecidable-instances' and the like?


that is a slightly different story, and it might be useful (a) to 
provide flag groups (-fno-mono*) and (b) to specify implication

(just about every language extension flag implies -fglasgow-exts,
so there's no need to specify that again, and there might be
other opportunities for reducing groups of options with a
single maximum in the implication order; one might even 
introduce pseudo-flags for grouping, such as -fhaskell2;-).


claus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Shouldn't this be lazy???

2007-06-22 Thread Olivier Boudry

Hi all,

I'm playing with the TagSoup library trying to extract links to
original pictures from my Flickr Sets page. This program first loads
the Sets page, opens links to each set, gets links to pictures and then
searches for the original picture link (see the steps in the main function).

It does the job, but for the tests I just wanted to take 10 links to
reduce the time the program runs. Just hoping that haskell laziness
would magically take the minimum amount of data required to get the
first 10 links out of this set of pages.

I did this replacing:
  (putStrLn . unlines . concat) origLinks
with
  (putStrLn . unlines . take 10 . concat) origLinks
in the main function.

With the last version of that line, I effectively only get 10 links
but the runtime is exactly the same for both main functions.

As I'm a newbie haskell programmer I'm certainly missing something.

By the way I know Flickr has an api I could use, but the purpose was
playing with TagSoup.

Thanks for any advice.

Olivier.

Here's the code:

module Main where

import Data.Html.TagSoup
import Control.Monad (liftM)
import Data.List (isPrefixOf, groupBy)
import Data.Maybe (mapMaybe)
import System (getArgs)
import System.Time
import IO (hPutStrLn, stderr)

base = "http://www.flickr.com"
setsUrl name = "/photos/" ++ name ++ "/sets/"

main :: IO ()
main = do
    args      <- getArgs
    tStart    <- getClockTime
    setLinks  <- getLinksByAttr ("class", "Seta") (base ++ setsUrl (args !! 0))
    picLinks  <- mapM (getLinksByAttr ("class", "image_link")) setLinks
    origLinks <- mapM (getLinksAfterImgByAttr ("src",
        "http://l.yimg.com/www.flickr.com/images/icon_download.gif")) $
        (mapMaybe linkToOrigSize . concat) picLinks
    (putStrLn . unlines . concat) origLinks
    tEnd      <- getClockTime
    hPutStrLn stderr ( timeDiffToString $ diffClockTimes tEnd tStart )

-- | extract all links from <a> tags having the given attribute
getLinksByAttr :: (String, String) -> String -> IO [String]
getLinksByAttr attr url = do
    sects <- getSectionsByTypeAndAttr "a" attr url
    return $ hrefs sects

-- | get <a> tags following an <img> having a specific attribute
getLinksAfterImgByAttr :: (String, String) -> String -> IO [String]
getLinksAfterImgByAttr attr url = do
    sects <- getSectionsByTypeAndAttr "img" attr url
    return $ hrefs $ map (dropWhile (not . isTagOpen) . drop 1) sects

-- | create sections from tag type and attribute
getSectionsByTypeAndAttr :: String -> (String, String) -> String -> IO [[Tag]]
getSectionsByTypeAndAttr tagType attr url = do
    tags <- liftM parseTags $ openURL $ url
    (return . filterByTypeAndAttr tagType attr) tags
  where
    filterByTypeAndAttr :: String -> (String, String) -> [Tag] -> [[Tag]]
    filterByTypeAndAttr t a = sections (~== TagOpen t [a])

-- | extract href values from sections of <a> tags
hrefs :: [[Tag]] -> [String]
hrefs = map (addBase . fromAttrib "href" . head)
  where
    addBase :: String -> String
    addBase s | "http://" `isPrefixOf` s = s
    addBase s | otherwise                = base ++ s

-- | transform a link to a picture into a link to the original size picture
linkToOrigSize :: String -> Maybe String
linkToOrigSize link =
    if parts !! 3 == "photos" then
        Just $ newUrl parts
    else
        Nothing
  where
    parts = map tail $ groupBy (const (/='/')) link
    newUrl p = "http://www.flickr.com/photo_zoom.gne?id=" ++ p !! 5 ++
        "&size=o&context=" ++ p !! 7
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Graphical Haskell

2007-06-22 Thread peterv
Wow thanks for all the info! This certainly can get me started.

And yet I have some more questions (sorry!):

- Unfortunately this project won't be open source; if my first tests are
successful, I will try to convince my employer (who wants to develop such a
graphical language) to use Haskell for building a prototype instead of
C#/F#/Java. Can Haskell be used for creating commercial projects? When the
product is released, it *will* be downloadable for free, but the source code
won't be (most likely). 

- If my employer agrees on Haskell, and when our first round of investment
is completed, we will be looking for a couple of good Haskell developers.
What would be the best place to look for good Haskell developers? This
mailing list? Ideally development will have to take place in
Antwerp/Belgium, although we might work with remotely located freelancers.
We prefer agile development (SCRUM, and maybe we will be doing extreme
programming, to be decided) with a small group of capable people. To get an
idea of what my employer is doing, visit http://www.nazooka.com. My
colleagues and I wrote most of the software for doing this back in the
1990s, and of course the real work is done by 3D graphics artists.

- Regarding GUIs, does a real FP-style GUI exist instead of those wrappers
around OO GUIs? I did some searches but besides some research papers about
FranTk and wxFruit I only found wrappers such as Gtk2Hs and wxHaskell that
use a lot of monadic IO. It's very hard for an old school OO style
programmer like myself to switch my mind into lazy functional programming
(although I think I've seen the light yesterday when digging deep into the
FRP of the SOE book, LOL ;-).
 
- Functional reactive programming looks cool (I only looked at the SOE
book, must still look at Yampa), but somehow I feel this is still an active
area of research. What is the latest work on FRP (for GUIs / games /
animation / simulations...)? What are the major open issues? 

- Regarding performance (for real-time simulations, not GUIs), I think the
garbage collector will get really stressed using FRP because of all those
infinite lazy streams; my gut feeling says a generational garbage collector
like Microsoft's .NET could help here (but the gut is often wrong, see
http://www.youtube.com/watch?v=RF3m3f9iMRc for a laugh ;). Regarding the
GC, is http://hackage.haskell.org/trac/ghc/wiki/GarbageCollectorNotes still
up-to-date?  

Okay, that's enough for now. More is less...

- Peter

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Claus Reinke
Sent: Friday, June 22, 2007 14:02
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Graphical Haskell

 Since nobody gave an answer on this topic, I guess it is insane to do it
in
 Haskell (at least for a newbie)? :)

not necessarily; we're all waiting for your first release?-)

 I would like to create a program that allows you to create such flow
graphs,
 and then let GHC generate the code and do type inference. 

spun off from dazzle, which you've found, there's also blobs:

http://www.cs.york.ac.uk/fp/darcs/Blobs/
 
 Now, instead of generating Haskell code (which I could do first, would be
 easier to debug), I would like to directly create an AST, and use an
Haskell
 API to communicate with GHC. 

one thing to consider: things get a little more tricky when the generated
haskell and dynamically loaded code is meant to do graphics (such as 
updating the original diagram with the state of the simulation). in
particular, 
check that the gui framework actually works via that more circuituous route 
(similar problems to running in ghci instead of ghc).
 
 I already found out that GHC indeed has such an API, but how possible is
 this idea? Has this been done before? 

the ghc api is meant to support this kind of endeavours, and it isn't frozen
yet, either: the ghc team is happy to receive feedback about things that
work 
or things that could work better.

before the ghc api, before blobs (after dazzle, though;), i did an embedding
of haskell-coloured petri nets in haskell, with a very simplistic graphical
net editor on top of wxhaskell, which generated haskell code for the net,
then called ghci to type-check and run the resulting code with a copy of
the original net graphics to update during simulation (poor man's
reflection:):

http://www.cs.kent.ac.uk/people/staff/cr3/HCPN/

it worked, but some things were annoying: 

- no high-level support for writing graph editors in wxhaskell;
blobs aims to fix that

- awkward meta-programming and runtime reflection;
ghc api should help a lot (but i can't see anything wrong with
letting it work on generated source code first; optimization can
come later)

- wxhaskell encourages low-level dependencies, at least when
you're writing your first wxhaskell programs, because it can
be rather difficult just to find the function you need, you're
tempted to use it right there, just to see if 

Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Neil Mitchell

Hi Michael,

You're wrong :)


foldr (||) False (repeat True)


Gives:


True


Remember that in Haskell everything is lazy, which means that the ||
short-circuits as soon as it can.

Thanks

Neil


On 6/22/07, Michael T. Richter [EMAIL PROTECTED] wrote:


 So, I'm now going over the code in the 'Report with a fine-toothed comb
because a) I'm actually able to read it now pretty fluently and b) I want to
know what's there in detail for a project I'm starting.  I stumbled across
this code:

 any :: (a -> Bool) -> [a] -> Bool
 any p = or . map p

 or :: [Bool] -> Bool
 or = foldr (||) False


Now I see how this works and it's all elegant and clear and all that.  But
I have two nagging problems with it (that are likely related):

   1. Using foldr means I'll be traversing the whole list no matter
   what.  This implies (perhaps for a good reason) that it can only work on a
   finite list.
   2. I don't see any early bale-out semantics.  The way I read this
   it's going to expand a whole list of n and perform n comparisons (including
   the one with the provided False).


Considering that I only need a single True result to make the whole
expression true, I'd have expected there to be some clever semantics to
allow exactly this.  But what I'm seeing, unless I'm really misreading the
code, is that if I give it a list of a million boolean expressions, it will
cheerfully evaluate these million boolean expressions and perform a million
calls to (||) before giving me a result.

Please tell me I'm wrong and that I'm missing something?

  --
*Michael T. Richter* [EMAIL PROTECTED] (*GoogleTalk:*
[EMAIL PROTECTED])
*There are two ways of constructing a software design. One way is to make
it so simple that there are obviously no deficiencies. And the other way is
to make it so complicated that there are no obvious deficiencies. (Charles
Hoare)*

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Philip Armstrong

On Fri, Jun 22, 2007 at 11:31:17PM +0800, Michael T. Richter wrote:

   1. Using foldr means I'll be traversing the whole list no matter what.
  This implies (perhaps for a good reason) that it can only work on a
  finite list.


foldr is lazy.


  Please tell me I'm wrong and that I'm missing something?


You are wrong and you're missing something :)

compare: 
 any ((==) 2) [1,2,3]

and
 any ((==) 2) [1..]

any ((==) 0) [1..] will go _|_ of course.

Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Dougal Stanton

On 22/06/07, Michael T. Richter [EMAIL PROTECTED] wrote:


 So, I'm now going over the code in the 'Report with a fine-toothed comb 
because a) I'm actually able to read it now pretty fluently and b) I want to 
know what's there in detail for a project I'm starting.  I stumbled across this 
code:


 any :: (a -> Bool) -> [a] -> Bool
 any p = or . map p

 or :: [Bool] -> Bool
 or = foldr (||) False

 Now I see how this works and it's all elegant and clear and all that.  But I 
have two nagging problems with it (that are likely related):

Using foldr means I'll be traversing the whole list no matter what.  This 
implies (perhaps for a good reason) that it can only work on a finite list.
I don't see any early bale-out semantics.  The way I read this it's going to 
expand a whole list of n and perform n comparisons (including the one with the 
provided False).



Well, try it:

Prelude> any (>10) [1..]
True

By way of contrast, this (doesn't) work as you expected:

Prelude> let any' p = foldl (||) False . map p
Prelude> any' (>10) [1..]
^C
Interrupted.

A left fold will keep on going with an infinite list in this case.
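
The reason is visible in foldl's definition (as in the Haskell 98 Prelude):

  foldl :: (a -> b -> a) -> a -> [b] -> a
  foldl _ z []     = z
  foldl f z (x:xs) = foldl f (f z x) xs   -- must reach the end of the list
                                          -- before anything can be returned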

D.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Chris Kuklewicz

Neil Mitchell wrote:

Hi Michael,

You're wrong :)

  foldr (||) False (repeat True)

Gives:

  True

Remember that in Haskell everything is lazy, which means that the || 
short-circuits as soon as it can.


Thanks

Neil



Specifically it is graph reduced like this:

or [F,T,F,F...]

foldr (||) F [F,T,F,F...]

F || foldr (||) F [T,F,F...]

foldr (||) F [T,F,F...]

T || foldr (||) F [F,F...]

T

The last line is because (T || _ = T) and laziness
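
For reference, the definitions driving that reduction, as given in the
Haskell 98 Prelude:

  (||) :: Bool -> Bool -> Bool
  True  || _ = True        -- the second argument is never demanded
  False || x = x

  foldr :: (a -> b -> b) -> b -> [a] -> b
  foldr _ z []     = z
  foldr f z (x:xs) = f x (foldr f z xs)   -- the recursive call is evaluated
                                          -- only if f demands its second argument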

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Philip Armstrong

On Thu, Jun 21, 2007 at 01:45:04PM +0100, Philip Armstrong wrote:

As I said, I've tried the obvious things  they didn't make any
difference. Now I could go sprinkling $!, ! and seq around like
confetti but that seems like giving up really.


OK. Looks like I was mistaken. Strictness annotations *do* make a
difference! Humph. Wonder what I was doing wrong yesterday?

Anyway timings follow, with all strict datatypes in the Haskell
version:

Language  File     Time in seconds
Haskell   ray.hs   38.2
OCaml     ray.ml   23.8
g++-4.1   ray.cpp  12.6


(ML  C++ Code from
http://www.ffconsultancy.com/languages/ray_tracer/comparison.html)

Gcc seems to have got quite a bit better since Jon last benchmarked
this code.

Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Michael T. Richter
So, I'm now going over the code in the 'Report with a fine-toothed comb
because a) I'm actually able to read it now pretty fluently and b) I
want to know what's there in detail for a project I'm starting.  I
stumbled across this code:


any :: (a -> Bool) -> [a] -> Bool
any p = or . map p

or :: [Bool] -> Bool
or = foldr (||) False


Now I see how this works and it's all elegant and clear and all that.
But I have two nagging problems with it (that are likely related):

 1. Using foldr means I'll be traversing the whole list no matter
what.  This implies (perhaps for a good reason) that it can only
work on a finite list.
 2. I don't see any early bale-out semantics.  The way I read this
it's going to expand a whole list of n and perform n comparisons
(including the one with the provided False).


Considering that I only need a single True result to make the whole
expression true, I'd have expected there to be some clever semantics to
allow exactly this.  But what I'm seeing, unless I'm really misreading
the code, is that if I give it a list of a million boolean expressions,
it will cheerfully evaluate these million boolean expressions and
perform a million calls to (||) before giving me a result.

Please tell me I'm wrong and that I'm missing something?

-- 
Michael T. Richter [EMAIL PROTECTED] (GoogleTalk:
[EMAIL PROTECTED])
There are two ways of constructing a software design. One way is to make
it so simple that there are obviously no deficiencies. And the other way
is to make it so complicated that there are no obvious deficiencies.
(Charles Hoare)


signature.asc
Description: This is a digitally signed message part
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shouldn't this be lazy???

2007-06-22 Thread Malcolm Wallace
Olivier Boudry [EMAIL PROTECTED] wrote:

 I did this replacing:
(putStrLn . unlines . concat) origLinks
 with
(putStrLn . unlines . take 10 . concat) origLinks

Unfortunately, 'origLinks' has already been computed in full, before the
'take 10' applies to it.  Why?  Because 'origLinks' is the result of an
I/O action, which forces it:

 main = do ...
   origLinks <- mapM (getLinksAfterImgByAttr ...) picLinks

What you really want to do is to trim the picLinks before you download
them. e.g.

 main = do ...
   origLinks <- mapM (getLinksAfterImgByAttr ...) (take 10 picLinks)

Regards,
Malcolm
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shouldn't this be lazy???

2007-06-22 Thread Marc Weber
 It does the job, but for the tests I just wanted to take 10 links to
 reduce the time the program runs. Just hoping that haskell laziness
 would magically take the minimum amount of data required to get the
 first 10 links out of this set of pages.

I haven't read the details of the post. But I think it's due to IO
operations not being lazy by default.

Have a look at this thread it might help
http://groups.google.com/group/fa.haskell/browse_thread/thread/5deaee07a8398d07/d5b3c85aa8c2860c?lnk=stq=Marc+Weber+lazyIOrnum=1hl=en#d5b3c85aa8c2860c

All that is done is throwing in an unsafeInterleaveIO at some locations.
Because I didn't want to implement all list functions again I had the
idea of inventing the LazyIO monad (which calls unsafeInterleaveIO
automatically). But doing this too often resulted in no list processing at
all ;)
I hope that this gives you a hint to look more stuff up on the wiki
using the search etc.
If this didn't help post again and I'll have a closer look.
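
Roughly, the idea is a newtype whose bind delays each action with
unsafeInterleaveIO; a sketch of the shape (not the actual code, and with all
the usual unsafeInterleaveIO caveats about when effects happen):

import Control.Monad    (ap, liftM)
import System.IO.Unsafe (unsafeInterleaveIO)

newtype LazyIO a = LazyIO { runLazyIO :: IO a }

instance Functor LazyIO where
  fmap = liftM

instance Applicative LazyIO where
  pure  = LazyIO . return
  (<*>) = ap

instance Monad LazyIO where
  return = pure
  LazyIO m >>= k = LazyIO $ do
    x <- unsafeInterleaveIO m   -- each step only runs when its result is demanded
    runLazyIO (k x)

-- lift an ordinary IO action into the lazy wrapper
lazily :: IO a -> LazyIO a
lazily = LazyIO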

Marc Weber
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Type-level Programming

2007-06-22 Thread Vincenz Syntactically

Dear all,

Recently I was playing around with encoding matrices in the type-level
system.  Thereby one can enable the multiplication of matrices.  The general
idea (which can be read about at
http://notvincenz.blogspot.com/2007/06/generalized-matrix-multiplication.html)
is that there is more than one way to multiply a matrix.

Given two matrices A and B, with M and N dimensions:

a_1*...*a_m and b_1*...*b_n  then whenever the last L dimensions of A match
the first L dimensions of B, they can be multiplied to have a matrix of
dimension:

a_1*..*a_(m-l)*b_(l+1)*...*b_n

What one does is a dot-product on those middle L dimensions.  This is what I
tried to do in the code in the blogpost.  However, I was unable to formulate
the constraints for the final multiplication class that does the actual
proper cross-multiplication.

Is this at all possible, or was I chasing ghosts?
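
(For what it's worth, the kind of type-level machinery involved can be
sketched like this; it is not the code from the blog post, just the shape of
the constraint: dimension lists as type-level lists plus a type-level append,
so that A's dimensions split as front ++ shared, B's as shared ++ back, and
the result has dimensions front ++ back.)

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances #-}

-- type-level list of dimensions
data Nil       = Nil
data Cons d ds = Cons d ds

-- Append xs ys zs holds when zs is xs ++ ys at the type level
class Append xs ys zs | xs ys -> zs
instance                    Append Nil         ys ys
instance Append xs ys zs => Append (Cons x xs) ys (Cons x zs)

The multiplication class would then carry two such Append constraints (front
and shared for A, shared and back for B) plus one building the result
dimensions.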

Best regards,
Christophe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shouldn't this be lazy???

2007-06-22 Thread Olivier Boudry

Reading code like the following:

main = do
 s <- getContents
 let r = map processIt (lines s)
 putStr (unlines r)

I was thinking all IO operations were lazy. But in fact it looks like
getContents is lazy by design (its result is produced on demand), whereas a
list bound from mapM of IO actions only exists once all of those actions
have run.

Thank you all for your helpful answers,

Olivier.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shouldn't this be lazy???

2007-06-22 Thread Olivier Boudry

Marc,

Thanks for the link. Your LazyIO monad is really interesting. Do you
know if this construct exists in GHC? (this question was left open in
this thread)

Olivier.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Shouldn't this be lazy???

2007-06-22 Thread Olivier Boudry

On 6/22/07, David Roundy [EMAIL PROTECTED] wrote:

Or make this lazy with:

 main = do ...
   origLinks <- mapM (unsafeInterleaveIO . getLinksAfterImgByAttr ...) 
picLinks
--
David Roundy
Department of Physics
Oregon State University


Just for info I used your tip to bring laziness into the function that
fetches the URLs. Work great and lazy now!

-- | create sections from tag type and attribute
getSectionsByTypeAndAttr :: String -> (String, String) -> String -> IO [[Tag]]
getSectionsByTypeAndAttr tagType attr url = do
   tags <- unsafeInterleaveIO $ liftM parseTags $ openURL $ url
   (return . filterByTypeAndAttr tagType attr) tags
 where
   filterByTypeAndAttr :: String -> (String, String) -> [Tag] -> [[Tag]]
   filterByTypeAndAttr t a = sections (~== TagOpen t [a])

Thanks,

Olivier.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Plugin Problem - Weirder

2007-06-22 Thread Daniel Fischer
Am Freitag, 22. Juni 2007 04:29 schrieb Donald Bruce Stewart:

 The file system was down here, sorry.  Should be up now.

Ah, just unlucky timing.
darcs got, installed, all well.

 -- Don

Thanks,
Daniel
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Telling the time

2007-06-22 Thread Andrew Coppin

Marc Weber wrote:

On Thu, Jun 21, 2007 at 09:15:12PM +0100, Andrew Coppin wrote:
  

Greetings.

Is there a standard library function anywhere which will parse a string 
into some kind of date/time representation? And, further, is there some 
function that will tell me how many seconds elapsed between two such times?


I know about Data.Time.*
and System.Time ( tdSec . diffClockTimes )

For parsing there is the library written by bringert:
http://www.cs.chalmers.se/~bringert/darcs/parsedate/

I don't know whether it is in the library index on haskell.org.
If not we should add it. I wasn't able to find it there.
  


OK, I'll try a deeper look...

(I see there's a giant pile of modules to do with dates and times, but I 
can't make much sense out of them - and in at least one place, the 



The trouble is that time processing can be complicated if you want to
pay attention to leap seconds/ years etc. leap seconds can't be known in
advance etc.
  


I don't care about leap seconds - and neither does the hardware clock on 
the server that generates these logs. ;-) I just want to find out how 
long each step took...
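
For what it's worth, with a recent time package the whole job can look like
this (a sketch, not from the thread; parseTimeM and defaultTimeLocale are
today's names, while the 6.6-era libraries had parseTime and kept the locale
in System.Locale), assuming a fixed timestamp format:

import Data.Time

-- difference between two timestamps, in seconds
elapsedSeconds :: String -> String -> Maybe NominalDiffTime
elapsedSeconds a b = do
  t1 <- parse a
  t2 <- parse b
  return (diffUTCTime t2 t1)
  where
    parse :: String -> Maybe UTCTime
    parse = parseTimeM True defaultTimeLocale "%Y-%m-%d %H:%M:%S"

e.g. elapsedSeconds "2007-06-22 10:00:00" "2007-06-22 10:01:30" gives Just 90s.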


documentation on the Haskell website doesn't actually match what's 
installed on my computer!)


That's why I'm reading the source all the time ;)
  


Oh... goodie...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Odd lack of laziness

2007-06-22 Thread Andrew Coppin

Chaddaï Fouché wrote:

You should be using BS.null f rather than BS.length f > 0.

While we're on the subject... anybody know a neat way to check, say, 
whether a list contains exactly 1 element? (Obviously pattern matching 
can do it, but that requires big case-expressions...)
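
(For reference: a single pattern clause is enough, and it works on infinite
lists too, since only the first two conses are ever inspected; a trivial
sketch, not from the original message:)

hasExactlyOne :: [a] -> Bool
hasExactlyOne [_] = True
hasExactlyOne _   = False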


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Collections

2007-06-22 Thread Andrew Coppin

Dan Piponi wrote:

Andrew said:


True enough - but that's a rather specific task. I'm still not seeing
vast numbers of other uses for this...


Graphs are one of the most ubiquitous structures in the whole of
computer science. Whether you're representing dataflows, or decoding
error-correcting codes, or decomposing an almost block matrix into
independent parts for multiprocessing, or figuring out which registers
to spill in a compiler, or programming neural networks, or finding the
shortest path between two cities, or trying to find dependencies in a
sequence of tasks, or constructing experimental designs, or using an
expert system to diagnose disease symptoms, or trying to find optimal
arrangements of marriage partners, or a million other tasks, graphs
appear everywhere!


I see *trees* around the place a lot, but not general graphs.

Maybe it's just the type of problems I attempt to solve?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Collections

2007-06-22 Thread Andrew Coppin

Lennart Augustsson wrote:

It's not that broken,  It was designed by people from the FP community. :)


OMG... A Java feature designed by people who know stuff about stuff?

Next thing they'll implement real multiple inheritance or something... ;-)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Collections

2007-06-22 Thread Andrew Coppin

apfelmus wrote:

:D :D
Finally someone who fully understands the true meaning of the prefix
unsafe  ;)
  


Personally, I'm loving the whole concept of this puppy:

 GHC.Prim.reallyUnsafePtrEquality#

I have absolutely no idea what it does, but it must be something really 
unsafe! ;-)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Odd lack of laziness

2007-06-22 Thread Stefan O'Rear
On Fri, Jun 22, 2007 at 07:14:39PM +0100, Andrew Coppin wrote:
 Chaddaï Fouché wrote:
 You should be using BS.null f rather than BS.length f > 0.
 
 While we're on the subject... anybody know a neat way to check, say, 
 whether a list contains exactly 1 element? (Obviously pattern matching 
 can do it, but that requires big case-expressions...)

data LazyNat = Zero | Succ LazyNat  deriving(Eq,Ord)

instance Enum LazyNat where
    succ = Succ
    pred (Succ x) = x

    toEnum 0         = Zero
    toEnum n | n > 0 = Succ (toEnum (n-1))

    fromEnum Zero     = 0
    fromEnum (Succ x) = fromEnum x + 1

instance Num LazyNat where -- this is a lie, the lifted naturals only
                           -- form a *semi*ring.  Sigh.
    fromInteger = toEnum . fromIntegral

    Zero   + y = y
    Succ x + y = Succ (x + y)

    Zero   * y = 0
    Succ x * y = y + x * y

    abs = id
    signum 0 = 0
    signum _ = 1

    x      - Zero   = x
    Succ x - Succ y = x - y


length' []     = Zero
length' (_:xs) = Succ (length' xs)

null x = length' x == 0

one x = length' x == 1

atLeastFive x = length' x >= 5
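
With these definitions the comparisons stop as soon as the answer is known,
even on infinite lists; a quick session (not part of the original message):

*Main> one "x"
True
*Main> one (repeat 'x')
False
*Main> atLeastFive [1..]
True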

Stefan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Collections

2007-06-22 Thread Andrew Coppin

Calvin Smith wrote:

Andrew Coppin wrote:
True enough - but that's a rather specific task. I'm still not seeing 
vast numbers of other uses for this...


You can see lots of applications for graphs at the following page:

http://www.graph-magics.com/practic_use.php


I see a pattern here - these are all the kinds of programs that I'm 
never likely to ever write. Maybe that's the cause? ;-)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Collections

2007-06-22 Thread Dan Piponi

Andrew said:


I see *trees* around the place a lot, but not general graphs.


A link discussing the application of graph theory for each of the
examples I gave. In each case the structure used is not a tree.

http://citeseer.ist.psu.edu/wiberg96codes.html
http://citeseer.ist.psu.edu/context/22137/0
http://en.wikipedia.org/wiki/Bayesian_network
http://www.scl.ameslab.gov/ctpsm07/
http://en.wikipedia.org/wiki/Neural_network
http://en.wikipedia.org/wiki/Dijkstra's_algorithm
http://links.jstor.org/sici?sici=0025-5572(198612)2%3A70%3A454%3C273%3ASBGPAG%3E2.0.CO%3B2-U
http://links.jstor.org/sici?sici=0025-570X(200106)74%3A3%3C234%3AAAOTML%3E2.0.CO%3B2-P
http://bears.ece.ucsb.edu/research-info/DP/dfg.html
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Dan Weston

This is how I think of it:

lazyIntMult :: Int -> Int -> Int
lazyIntMult 0 _ = 0
lazyIntMult _ 0 = 0
lazyIntMult a b = a * b

*Q> 0 * (5 `div` 0)
*** Exception: divide by zero
*Q> 0 `lazyIntMult` (5 `div` 0)
0

foldr evaluates a `f` (b `f` (c `f` ...))

Only f knows which arguments are strict and in which order to evaluate 
them. foldr knows nothing about evaluation order.


Dan

Michael T. Richter wrote:
So, I'm now going over the code in the 'Report with a fine-toothed comb 
because a) I'm actually able to read it now pretty fluently and b) I 
want to know what's there in detail for a project I'm starting.  I 
stumbled across this code:


any :: (a -> Bool) -> [a] -> Bool
any p = or . map p

or :: [Bool] -> Bool
or = foldr (||) False


Now I see how this works and it's all elegant and clear and all that.  
But I have two nagging problems with it (that are likely related):


   1. Using foldr means I'll be traversing the whole list no matter
  what.  This implies (perhaps for a good reason) that it can only
  work on a finite list.
   2. I don't see any early bale-out semantics.  The way I read this
  it's going to expand a whole list of n and perform n comparisons
  (including the one with the provided False). 



Considering that I only need a single True result to make the whole 
expression true, I'd have expected there to be some clever semantics to 
allow exactly this.  But what I'm seeing, unless I'm really misreading 
the code, is that if I give it a list of a million boolean expressions, 
it will cheerfully evaluate these million boolean expressions and 
perform a million calls to (||) before giving me a result.


Please tell me I'm wrong and that I'm missing something?

--
*Michael T. Richter* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] (*GoogleTalk:* [EMAIL PROTECTED])
/There are two ways of constructing a software design. One way is to 
make it so simple that there are obviously no deficiencies. And the 
other way is to make it so complicated that there are no obvious 
deficiencies. (Charles Hoare)/





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Bulat Ziganshin
Hello Philip,

Friday, June 22, 2007, 7:36:51 PM, you wrote:
 Language File     Time in seconds
 Haskell  ray.hs   38.2
 OCaml    ray.ml   23.8
 g++-4.1  ray.cpp  12.6

can you share sourcecode of this variant? i'm interested to see how
much it is obfuscated

btw, *their* measurement said that ocaml is 7% faster :)

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Bulat Ziganshin
Hello Michael,

Friday, June 22, 2007, 7:31:17 PM, you wrote:

no surprise - you got a lot of answers :)  it is the best part of
Haskell, after all :)

the secret Haskell weapon is lazy evaluation which makes *everything*
short-circuited. just consider the standard (&&) definition:

(&&) False _ = False
(&&) True  x = x

this means that as soon as the first argument of (&&) is False, we don't
even examine the second one. and because everything is lazily evaluated,
this second argument is passed as a non-evaluated *expression*. if we never
examine it, it will never be evaluated:

Prelude> True && (0 `div` 0 > 0)
*** Exception: divide by zero
Prelude> False && (0 `div` 0 > 0)
False

in particular, this allows to create your own control structures. and
another example is that you found: infinite list may be processed as
far as you may calculate result using only finite part of list:

Prelude> take 10 [1..]
[1,2,3,4,5,6,7,8,9,10]
Prelude> and (cycle [True, False])
False

in particular, last example calculated as

True && False && ...

where ... remains uncalculated because we find the final answer after
examining the second list element. i suggest you use textual
substitution to see how the last and call is translated into this
sequence of (&&) applications. this substitution is the real process of
evaluating haskell code, it is called graph reduction by scientists :)
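
spelling that substitution out (not part of the original mail; recall that
and = foldr (&&) True in the Prelude):

  and (cycle [True,False])
= foldr (&&) True (True : False : cycle [True,False])
= True  && foldr (&&) True (False : cycle [True,False])
= foldr (&&) True (False : cycle [True,False])
= False && foldr (&&) True (cycle [True,False])
= False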

-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Best idiom for avoiding Defaulting warnings with ghc -Wall -Werror ??

2007-06-22 Thread Dave Bayer

Hi all,

I've been going over my code trying to get it all to compile with  
ghc -Wall -Werror, without introducing constructs that would make  
my code the laughing stock of the dynamic typing community. They  
already think we're nuts; my daydreams are of a more computer  
literate society where Jessie Helms stands up in the U.S. Senate to  
read aloud my type declarations to the derisive laughter of the Ruby  
and Lisp parties.


There's a fine line between my opinion as to how GHC should issue  
warnings, and a legitimate bug report. I've already submitted a bug  
report for the need to declare the type of the wildcard pattern,  
because I believe that the case is clear. Here, I'm seeking guidance.  
Perhaps I just don't know the most elegant construct to use?


My sample code is this:



{-# OPTIONS_GHC -Wall -Werror #-}

module Main where

import Prelude hiding ((^))
import qualified Prelude ((^))

default (Int)

infixr 8 ^
(^) :: Num a => a -> Int -> a
x ^ n = x Prelude.^ n

main :: IO ()
main =
   let r = pi :: Double
       x = r ^ (3 :: Int)
       y = r ^ 3
       z = r Prelude.^ 3
   in  putStrLn $ show (x,y,z)



GHC issues a Warning: Defaulting the following constraint(s) to type  
`Int' for the definition of z.


The definition of y glides through, so a qualified import and  
redefinition of each ambiguous operator does provide a work-around,  
but the code is lame. (I could always encapsulate it in a module  
Qualude.)


If I import a module that I don't use, then ghc -Wall -Werror  
rightly complains. By analogy, if I use default (Int) to ask GHC to  
default to Int but the situation never arises, then GHC should  
rightly complain. Instead, if I use default (Int), GHC complains  
about defaulting anyways. In my opinion, this is a bug, but I'd like  
guidance before reporting it. Is there a more elegant way to handle  
the numeric type classes with ghc -Wall -Werror ?


No one is forced to use ghc -Wall -Werror, but it should be a  
practical choice.


I've enjoyed the recent typing discussions here. On one hand, there's  
little difference between using dynamic typing, and writing  
incomplete patterns in a strongly typed language. On the other hand,  
how is an incomplete pattern any different from code that potentially  
divides by zero? One quickly gets into decidability issues, dependent  
types, Turing-complete type systems.


My personal compromise is to use ghc -Wall -Werror, live with the  
consequences, and get back to work. Perhaps I'll get over it, but  
that's a slippery slope back to Lisp.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Best idiom for avoiding Defaulting warnings with ghc -Wall -Werror ??

2007-06-22 Thread David Roundy
On Fri, Jun 22, 2007 at 11:37:15AM -0700, Dave Bayer wrote:
 GHC issues a Warning: Defaulting the following constraint(s) to type  
 `Int' for the definition of z.

Why don't you just use -fno-warn-type-defaults? Warnings are just that:
warnings.  If you believe the defaulting matches what you want to do, then
you don't need the warning.
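
The flag can also be scoped to a single module with an OPTIONS_GHC pragma; a
minimal sketch:

{-# OPTIONS_GHC -Wall -Werror -fno-warn-type-defaults #-}
module Main where

main :: IO ()
main = print (2 ^ 3)   -- would trigger the defaulting warning under plain -Wall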

ghc -Werror -Wall is often a good idea, but if you prefer a different
programming style (e.g. no top-level type declarations required), ghc gives
you the flexibility to do that.
-- 
David Roundy
Department of Physics
Oregon State University
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell version of ray tracer code is much slower than the original ML

2007-06-22 Thread Philip Armstrong

On Fri, Jun 22, 2007 at 10:11:27PM +0400, Bulat Ziganshin wrote:

Friday, June 22, 2007, 7:36:51 PM, you wrote:

Language File     Time in seconds
Haskell  ray.hs   38.2
OCaml    ray.ml   23.8
g++-4.1  ray.cpp  12.6


can you share sourcecode of this variant? i'm interested to see how
much it is obfuscated


http://www.kantaka.co.uk/darcs/ray

The cpp  ml versions are precisely those available from the download
links on http://www.ffconsultancy.com/languages/ray_tracer/comparison.html

The optimisation options I used can be seen in the makefile.


btw, *their* measurement said that ocaml is 7% faster :)


Indeed. The gcc-4.0 compiled binary runs at about 15s IIRC, but it's
still much better than 7% faster than the ocaml binary.

cheers, Phil

--
http://www.kantaka.co.uk/ .oOo. public key: http://www.kantaka.co.uk/gpg.txt
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Andrew Coppin

Chris Kuklewicz wrote:

Specifically it is graph reduced like this:

or [F,T,F,F...]

foldr (||) F [F,T,F,F...]

F || foldr (||) F [T,F,F...]

foldr (||) F [T,F,F...]

T || foldr (||) F [F,F...]

T

The last line is because (T || _ = T) and laziness


I must sheepishly confess that I mistakenly thought that foldr would 
construct a chain of ORs, which would then only be evaluated when 
it's returned.


Now I see why there's a strict version of foldl but *not* foldr... ;-)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Collections

2007-06-22 Thread Duncan Coutts
On Fri, 2007-06-22 at 15:34 +1000, Thomas Conway wrote:
 On 6/22/07, Duncan Coutts [EMAIL PROTECTED] wrote:
  You might find that lazy IO is helpful in this case. The primitive that
  implements lazy IO is unsafeInterleaveIO :: IO a -> IO a
 
 Personally, unsafeInterleaveIO is so horribly evil, that even just
 having typed the name, I'll have to put the keyboard through the
 dishwasher (see http://www.coudal.com/keywasher.php). Also, I need to
 support concurrent querying and updates, and trying to manage the
 locking is quite hard enough as it is, without trying to keep track of
 which postings vectors have closures pointing to them!

Ah yes, fair enough. If you're doing updates at the same time then lazy
IO isn't appropriate as you need control over when the IO happens.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[4]: [Haskell-cafe] haskell crypto is reaaaaaaaaaally slow

2007-06-22 Thread Duncan Coutts
On Fri, 2007-06-22 at 10:52 +0400, Bulat Ziganshin wrote:

  i tried it once and found that ByteArray# size is returned rounded to 4 -
  there is no way in GHC runtime to alloc, say, exactly 37 bytes. and
  don't forget to add 2 unused bytes at average
 
  Right, GHC heap object are always aligned to the natural alignment of
  the architecture, be that 4 or 8 bytes.

 what i'm trying to say is that one needs to store the exact string size
 because the value returned by getSizeOfByteArray is aligned to 4


Ah yes, you're quite right. To allow GHC's ByteArray# to be used to
implement a compact string type it'd have to be changed to store the
length in bytes rather than words.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Best idiom for avoiding Defaulting warnings with ghc -Wall -Werror ??

2007-06-22 Thread Dave Bayer

On Jun 22, 2007, at 11:42 AM, David Roundy wrote:


On Fri, Jun 22, 2007 at 11:37:15AM -0700, Dave Bayer wrote:

GHC issues a Warning: Defaulting the following constraint(s) to type
`Int' for the definition of z.


Why don't you just use -fno-warn-type-defaults?

...

 ghc -Werror -Wall is often a good idea, but if you prefer a different
programming style (e.g. no top-level type declarations required),  
ghc gives

you the flexibility to do that.


To be precise, I __PREFER__ a ghc  -Wall -Werror programming style.  
In particular, I always want defaulting errors, because sometimes I  
miss the fact that numbers I can count on my fingers are defaulting  
to Integer.


Once I explicitly declare default (Int), I want ghc  -Wall - 
Werror to shut up, unless this defaulting rule never gets used.  
Instead, it complains anyways when the defaulting takes place that  
I've just declared I know about. In other words, I want warnings  
involving default to follow the same logic currently used for  
warnings involving import.


This is a bug. I want ghc  -Wall -Werror to be a practical choice,  
left on all the time, and in my example I had to work too hard to  
avoid the warning. Other people just wouldn't use ghc  -Wall - 
Werror, the way some people won't use seat belts, and the way some  
people view any strongly typed language as a cumbersome seat belt. If  
we tolerate ridiculously arcane syntax to handle these situations, we  
fully deserve to be marginalized while Ruby takes over the world.


In other words, I'm disputing that the top-level declarations are in  
fact required. GHC can be trivially modified to allow Haskell to  
handle this situation far more elegantly.


(It is amusing the sides we're taking on this, and the stereotype  
that physicists compute faster than mathematicians because they don't  
worry about convergence issues. Effectively, the stereotype holds  
that mathematicians think with -Wall -Werror on, and physicists  
don't. Perhaps it's true?)



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Andrew Coppin

Bulat Ziganshin wrote:

no surprise - you got a lot of answers :)  it is the best part of
Haskell, after all :)
  


Only if you ask easy questions. ;-)


the secret Haskell weapon is lazy evaluation which makes *everything*
short-circuited. just consider standard () definition:
  


Again, only if you Do It Right(tm). But fortunately, that's not usually 
difficult... :-D


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Lambdabot

2007-06-22 Thread Derek Elkins
On Fri, 2007-06-22 at 20:10 +0200, Daniel Fischer wrote:

[blah blah blah]

 Finally, is there a tutorial/manual for using lambdabot?

Um... #haskell...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] directory tree?

2007-06-22 Thread Chad Scherrer

Haskell is great at manipulating tree structures, but I can't seem to
find anything representing a directory tree. A simple representation
would be something like this:

data Dir = Dir {dirName :: String, subDirectories :: [Dir], files :: [File]}
data File = File {fileName :: String, fileSize :: Int}

Maybe these would need to be parametrized to allow a function
splitting files by extension or that kind of thing. Anyway, the whole
idea would be to abstract as much of the file stuff out of the IO
monad as possible.

I haven't used the Scrap Your Boilerplate stuff yet, but it seems
like that could fit in here naturally to traverse a Dir and make
changes at specified points.

The only problem I can see (so far) with the approach is that it might
make big changes to the directory tree too easy to make. I'm not
sure immediately how to deal with that, or if the answer is just to
post a be careful disclaimer.

So, what do you think? Do you know of any work in this direction? Is
there a way to make dangerous 1-liners safe? Is there a fundamental
flaw with the approach I'm missing?
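
For concreteness, here is one way the reading side could look (just a
sketch, not an existing library; sizes come out as Integer because that is
what hFileSize returns, and special files or permission errors would need
real error handling):

import System.Directory (doesDirectoryExist, getDirectoryContents)
import System.FilePath ((</>))
import System.IO (IOMode (ReadMode), hFileSize, withFile)

data Dir  = Dir  { dirName :: String, subDirectories :: [Dir], files :: [File] }
data File = File { fileName :: String, fileSize :: Integer }

-- Read a whole tree strictly inside IO, then hand the pure Dir value to
-- the rest of the program.
readDir :: FilePath -> IO Dir
readDir path = do
  names   <- getDirectoryContents path
  entries <- mapM classify (filter (`notElem` [".", ".."]) names)
  return (Dir path [d | Left d <- entries] [f | Right f <- entries])
  where
    classify name = do
      let full = path </> name
      isDir <- doesDirectoryExist full
      if isDir
        then fmap Left (readDir full)
        else do sz <- withFile full ReadMode hFileSize
                return (Right (File name sz))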

Thanks much,

Chad
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Need for speed: the Burrows-Wheeler Transform

2007-06-22 Thread Andrew Coppin
OK, so I *started* writing a request for help, but... well I answered my 
own question! See the bottom...





I was reading Wikipedia, and I found this:

 http://en.wikipedia.org/wiki/Burrows-Wheeler_transform

I decided to sit down and see what that looks like in Haskell. I came up 
with this:




module BWT where

import Data.List

rotate1 :: [x] -> [x]
rotate1 [] = []
rotate1 xs = last xs : init xs

rotate :: [x] -> [[x]]
rotate xs = take (length xs) (iterate rotate1 xs)

bwt :: (Ord x) => [x] -> [x]
bwt = map last . sort . rotate


step :: (Ord x) => [x] -> [[x]] -> [[x]]
step xs = zipWith (:) xs . sort

inv_bwt :: (Ord x) => x -> [x] -> [x]
inv_bwt mark xs =
 head . filter (\xs -> head xs == mark) .
 head . drop ((length xs) - 1) . iterate (step xs) .
 map (\x -> [x]) $ xs



My my, isn't that SO much shorter? I love Haskell! :-D

Unfortunately, the resident C++ expert fails to grasp the concept of 
example code, and insists on comparing the efficiency of this program 
to the C one on the website.


Fact is, he's translated the presented C into C++, and it can apparently 
transform a 145 KB file in 8 seconds using only 3 MB of RAM. The code 
above, however, took about 11 seconds to transform 4 KB of text, and 
that required about 60 MB of RAM. (I tried larger, but the OS killed the 
process for consuming too much RAM.)


Well anyway, the code was written for simplicity, not efficiency. I've 
tried to explain this, but apparently that is beyond his comprehension. 
So anyway, it looks like we have a race on. :-D


The first thing I did was the optimisation mentioned on Wikipedia: you 
don't *need* to build a list of lists. You can just throw pointers 
around. So I arrived at this:




module BWT2 (bwt) where

import Data.List

rotate :: Int -> [x] -> Int -> [x]
rotate l xs n = (drop (l-n) xs) ++ (take (l-n) xs)

bwt xs =
 let l  = length xs
     ys = rotate l xs
 in  map (last . rotate l xs) $
     sortBy (\n m -> compare (ys n) (ys m)) [0..(l-1)]


This is indeed *much* faster. With this, I can transform 52 KB of text 
in 9 minutes + 60 MB RAM. The previous version seemed to have quadratic 
memory usage, whereas this one seems to be linear. 52 KB would have 
taken many months with the first version!


Still, 9 minutes (for a file 3 times smaller) is nowhere near 8 seconds. 
So we must try harder... For my next trick, ByteStrings! (Never used 
them before BTW... this is my first try!)



module BWT3 (bwt) where

import Data.List
import qualified Data.ByteString as Raw

rotate :: Int -> Raw.ByteString -> Int -> Raw.ByteString
rotate l xs n = (Raw.drop (l-n) xs) `Raw.append` (Raw.take (l-n) xs)

bwt xs =
 let l  = Raw.length xs
     ys = rotate l xs
 in  Raw.pack $
     map (Raw.last . rotate l xs) $
     sortBy (\n m -> compare (ys n) (ys m)) [0..(l-1)]


Now I can transform 52 KB in 54 seconds + 30 MB RAM. Still nowhere near 
C++, but a big improvement none the less.


Woah... What the hell? I just switched to Data.ByteString.Lazy and WHAM! 
Vast speed increases... Jeepers, I can transform 52 KB so fast I can't 
even get to Task Manager fast enough to *check* the RAM usage! Blimey...


OK, just tried the 145 KB test file that Mr C++ used. That took 2 
seconds + 43 MB RAM. Ouch.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A probably-stupid question about a Prelude implementation.

2007-06-22 Thread Jan-Willem Maessen


On Jun 22, 2007, at 2:30 PM, Dan Weston wrote:


This is how I think of it:

lazyIntMult :: Int -> Int -> Int
lazyIntMult 0 _ = 0
lazyIntMult _ 0 = 0
lazyIntMult a b = a * b

*Q> 0 * (5 `div` 0)
*** Exception: divide by zero
*Q> 0 `lazyIntMult` (5 `div` 0)
0

foldr evaluates a `f` (b `f` (c `f` ...))

Only f knows which arguments are strict and in which order to  
evaluate them. foldr knows nothing about evaluation order.


And, indeed, if you foldr a function with left zeroes, and you check  
for them explicitly as lazyIntMult and (||) do, then foldr is  
guaranteed to terminate early if it finds a zero.


z is a left zero of op if for all x, z `op` x = z.

This isn't the only time foldr will terminate early, but it is an  
important one.
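
A quick check of that claim with the lazyIntMult above (not part of the
original message): 0 is a left zero, so the fold stops there even though the
rest of the list is infinite:

*Main> foldr lazyIntMult 1 ([4,5,0] ++ [1..])
0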


-Jan-Willem Maessen



Dan




smime.p7s
Description: S/MIME cryptographic signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Lambdabot

2007-06-22 Thread Daniel Fischer
I can partially answer my questions.
Removing also Seen does away with the ByteString.index error.
Must check the code to see why.

Two more concrete questions
a) how do I gracefully leave lambdabot?
ctrl-C or killing it from another shell are the only ways out I found so far.
b) what does lambdabot expect in the fptools directory?

Cheers,
Daniel

Am Freitag, 22. Juni 2007 20:10 schrieb ich:
 Greetings,
 lambdabot segfaulted when installing
 Fact
 Haddock
 Quote
 Source
 Todo
 Where

 What's special about them?
 I.e., why did they cause a segfault and the others not?

 And, how could I build a lambdabot _with_ them (though I'm not sure, I'll
 actually want them, but I might).

 Without these, I now have an apparently working lambdabot, well, not
 properly working.
 First time I started it, all seemed well, but from then on:
 $ ./lambdabot
 Initialising plugins ..sending message to
 bogus server: IrcMessage {msgServer = freenode, msgLBName =
 urk!outputmessage, msgPrefix = , msgCommand = NAMES, msgParams =
 []}
 ... done.
 Main: caught (and ignoring) IRCRaised Data.ByteString.index: index too
 large: 0, length = 0
 lambdabot> > 3+7
 Main: caught (and ignoring) IRCRaised Data.ByteString.index: index too
 large: 0, length = 0
  10
 lambdabot>

 The bogus message thing also appeared the first time, I hope this is meant
 to be so.
 But what about the ByteString.index exception?
 Where might that come from?
 How to get rid of it?

 Finally, is there a tutorial/manual for using lambdabot?

 Thanks for any help,
 Daniel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Lambdabot

2007-06-22 Thread Stefan O'Rear
On Fri, Jun 22, 2007 at 10:37:55PM +0200, Daniel Fischer wrote:
 I can partially answer my questions.
 Removing also Seen does away with the ByteString.index error.
 Must check the code to see why.
 
 Two more concrete questions
 a) how do I gracefully leave lambdabot?
 ctrl-C or killing it from another shell are the only ways out I found so far.

quit

 b) what does lambdabot expect in the fptools directory?

Absolutely nothing.  It's dead code.

Stefan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-22 Thread Jeremy Shaw
Hello,

Have you seen Tom Moertel's series on directory-tree printing in Haskell ?

http://blog.moertel.com/articles/2007/03/28/directory-tree-printing-in-haskell-part-three-lazy-i-o

j.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] directory tree?

2007-06-22 Thread David Roundy
On Fri, Jun 22, 2007 at 01:19:01PM -0700, Chad Scherrer wrote:
 Haskell is great at manipulating tree structures, but I can't seem to
 find anything representing a directory tree. A simple representation
 would be something like this:
 
 data Dir = Dir {dirName :: String, subDirectories :: [Dir], files :: [File]}
 data File = File {fileName :: String, fileSize :: Int}
 
 Maybe these would need to be parametrized to allow a function
 splitting files by extension or that kind of thing. Anyway, the whole
 idea would be to abstract as much of the file stuff out of the IO
 monad as possible.
 
 I haven't used the Scrap Your Boilerplate stuff yet, but it seems
 like that could fit in here naturally to traverse a Dir and make
 changes at specified points.
 
 The only problem I can see (so far) with the approach is that it might
 make big changes to the directory tree too easy to make. I'm not
 sure immediately how to deal with that, or if the answer is just to
 post a be careful disclaimer.
 
 So, what do you think? Do you know of any work in this direction? Is
 there a way to make dangerous 1-liners safe? Is there a fundamental
 flaw with the approach I'm missing?

Darcs does this sort of thing (see SlurpDirectory.lhs in the source code),
which can be pretty nice, but you really only want to use it for read-only
purposes.  Early versions of darcs made modifications directly to Slurpies
(these directory tree data structures), which kept track of what changes
had been made, and then there was a "write modifications" IO function that
actually made the changes.  But this was terribly fragile, since ordering
of changes had to be kept track of, etc.  The theory was nice, that we'd be
able to make the actual changes all at once (only write once to each file,
for example, each file had a dirty bit), but in practice we kept running
into trouble.  So now we just use this for read-only purposes, for which it
works fine, although it still can be scary, if users modify the directories
while we're looking at them (but this is scary regardless...).

Nowadays we've got a (moderately) nice little monad DarcsIO, which allows
us to do file/directory IO operations on either Slurpies or disk, or
various other sorts of virtualized objects.  Someday I'd like to write an
industrial strength version of this monad (which addresses more than just
darcs' needs).
-- 
David Roundy
Department of Physics
Oregon State University
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

