Re: [Haskell-cafe] monadic DSL for compile-time parser generator, not possible?

2013-03-13 Thread Josef Svenningsson
Jeremy,

The problem you're trying to solve might seem tricky but it is in fact
quite solvable. In Feldspar[1] we use monads quite frequently and generate
code from them, in a similar fashion to what you're trying to do. We've
written a paper about how we do it[2] that I welcome you to read. If you
have any questions regarding the paper I'd be happy to try to answer them.
There are two parts to the trick. One is to use the continuation monad to
get a monad instance. The other trick is to restrict any functions you have
in your data type (like FMap in your example) so that they can be reified
into something that can be compiled, which would be Template Haskell in
your case.

To help you along the way I've prepared some code to give you an idea of
how this can be done. This code only shows the continuation monad trick but
I hope it's useful nonetheless.

\begin{code}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE GADTs #-}
module MonadReify where

newtype Cont r a = C { unC :: (a -> r) -> r }

instance Monad (Cont r) where
  return a = C $ \k -> k a
  m >>= f  = C $ \k -> unC m (\a -> unC (f a) k)

data ParserSpec a where
  AnyChar :: ParserSpec Char
  Return  :: a -> ParserSpec a
  Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
  FMap    :: (a -> b) -> ParserSpec a -> ParserSpec b

bindSpec :: ParserSpec a -> (a -> ParserSpec b) -> ParserSpec b
bindSpec p f = Join (FMap f p)

newtype Parser a = P { unP :: forall r. Cont (ParserSpec r) a }

instance Monad Parser where
  return a = P (return a)
  m >>= f  = P (unP m >>= \a -> unP (f a))

anyChar :: Parser Char
anyChar = P (C $ \k -> bindSpec AnyChar k)

reifyParser :: Parser a -> (forall r. ParserSpec r -> b) -> b
reifyParser (P (C f)) g = g (f (\a -> Return a))
\end{code}
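
To give a feel for how this gets used, here is a small usage sketch (the consumer `countSpec` is just something made up for illustration; note that on newer GHCs you would also need Functor and Applicative instances to go with the Monad instances above):

\begin{code}
twoChars :: Parser (Char, Char)
twoChars = do
  a <- anyChar
  b <- anyChar
  return (a, b)

-- Count the constructors in the reified specification; any function
-- that is polymorphic in the result type r can serve as the consumer.
countSpec :: ParserSpec r -> Int
countSpec AnyChar    = 1
countSpec (Return _) = 1
countSpec (Join p)   = 1 + countSpec p
countSpec (FMap _ p) = 1 + countSpec p

specSize :: Int
specSize = reifyParser twoChars countSpec
\end{code}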

Cheers,

Josef

[1]https://github.com/Feldspar/feldspar-language
[2]http://www.cse.chalmers.se/~josefs/publications/paper21_cameraready.pdf

On Tue, Mar 12, 2013 at 9:06 PM, Jeremy Shaw jer...@n-heptane.com wrote:

 It would be pretty damn cool if you could create a data type for
 generically describing a monadic parser, and then use template haskell
 to generate a concrete parser from that data type. That would allow
 you to create your specification in a generic way and then target
 different parsers like parsec, attoparsec, etc. There is some code
 coming up in a few paragraphs that should describe this idea more
 clearly.

 I would like to suggest that while it would be cool, it is
 impossible. As proof, I will attempt to create a simple monadic parser
 that has only one combinator:

 anyChar :: ParserSpec Char

 We need the GADTs extension and some imports:

  {-# LANGUAGE GADTs, TemplateHaskell #-}
  import Control.Monad  (join)
  import qualified Text.Parsec.Char as P
  import Language.Haskell.TH(ExpQ, appE)
  import Language.Haskell.TH.Syntax (Lift(lift))
  import Text.Parsec(parseTest)
  import qualified Text.Parsec.Char as P
  import Text.Parsec.String (Parser)

 Next we define a type that has a constructor for each of the different
 combinators we want to support, plus constructors for the functor and
 monad methods:

  data ParserSpec a where
      AnyChar :: ParserSpec Char
      Return  :: a -> ParserSpec a
      Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
      FMap    :: (a -> b) -> ParserSpec a -> ParserSpec b
 
  instance Lift (ParserSpec a) where
      lift _ = error "not defined because we are screwed later anyway."

 In theory, we would extend that type with things like `Many`, `Some`,
 `Choice`, etc.

 In Haskell, we are used to seeing a `Monad` defined in terms of
 `return` and `>>=`. But we can also define a monad in terms of
 `fmap`, `return` and `join`. We will do that in `ParserSpec`, because
 it makes the fatal flaw more obvious.

 Now we can define the `Functor` and `Monad` instances:

  instance Functor ParserSpec where
  fmap f p = FMap f p

  instance Monad ParserSpec where
  return a = Return a
   m >>= f  = Join (FMap f m)

 and the `anyChar` combinator:

  anyChar :: ParserSpec Char
  anyChar = AnyChar

 And now we can define a simple parser that parses two characters and
 returns them:

  charPair :: ParserSpec (Char, Char)
  charPair =
      do a <- anyChar
         b <- anyChar
         return (a, b)

 Now, we just need to define a template haskell function that generates
 a `Parser` from a `ParserSpec`:

  genParsec :: (Lift a) => ParserSpec a -> ExpQ
  genParsec AnyChar    = [| anyChar |]
  genParsec (Return a) = [| return a |]
  genParsec (Join p)   = genParsec p
  -- genParsec (FMap f p) = appE [| f |] (genParsec p) -- uh-oh

 Looking at the `FMap` case we see the fatal flaw. In order to
 generate the parser we would need some way to transform any arbitrary
 Haskell function of type `a -> b` into Template Haskell. Obviously,
 that is impossible (for some definition of obvious).

 Therefore, we can assume that it is not possible to use Template
 Haskell to generate a monadic parser from a monadic specification.

 We can also assume 

[Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
Hi,

I'm putting together a cabal package and I'd like to have some code
examples in my description file. In particular I would like to have a code
block containing markdown containing a code block of Haskell, like this:

 ~~~{ .haskell }
 module Main where

 main = putStrLn "Hello World!"
 ~~~

When I put the above code in my .cabal file and do `cabal haddock
--executables` I get the following error:

haddock: failed to parse haddock prologue from file:
dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt

In general I can provoke the parse error from haddock whenever I have
something inside curly braces.

So the error seems to stem from haddock. I've tried to track down what
happens inside haddock but I've run out of steam. I'd like to know if there is
anything I can do to be able to write something like the above as part of
the description in my .cabal file.
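
For concreteness, the description field I'm after would look roughly like this (hypothetical package; the bird tracks and the lone '.' separator lines follow cabal's usual description conventions, and this is exactly the part that currently fails to build docs):

  description:
    A small tool for extracting code blocks.
    .
    The usage example I'd like to include:
    .
    > ~~~{ .haskell }
    > module Main where
    > main = putStrLn "Hello World!"
    > ~~~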

Thanks,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
On Mon, Feb 4, 2013 at 1:26 PM, Mateusz Kowalczyk
fuuze...@fuuzetsu.co.uk wrote:

 I don't understand why you're putting it in your .cabal file. Isn't
 something like 3.8.5 over at [1] what you're trying to achieve?

 Right, I probably should have mentioned that the reason I put it in the
.cabal file is that the cabal package I'm putting together is an
executable. So I want the documentation to show up on the index page.


 ...

 I had a look at a package ([2]) that I know uses a multi-line code block
 example. Here's what I found in its cabal file:
 An example:
 .
  > runShpider $ do
  >    download "http://apage.com"
  >    theForm : _ <- getFormsByAction "http://anotherpage.com"
  >    sendForm $ fillOutForm theForm $ pairs $ do
  >       "occupation" =: "unemployed Haskell programmer"
  >       "location" =: "mother's house"
 .

 Depending on your mail client, the `>' signs might get quoted.

 Yes. I have no problems including code in general. It's when I have things
inside curly braces that I get a parse error. And that's exactly what I
would like to have.

Thanks,

Josef



 [1] - http://www.haskell.org/haddock/doc/html/ch03s08.html
 [2] -

 http://hackage.haskell.org/packages/archive/shpider/0.2.1.1/doc/html/Network-Shpider.html

 On 04/02/13 12:30, Josef Svenningsson wrote:
  Hi,
 
  I'm putting together a cabal package and I'd like to have some code
  examples in my description file. In particular I would like to have a
  code block containing markdown containing a code block of Haskell, like
  this:
 
  ~~~{ .haskell }
  module Main where
 
   main = putStrLn "Hello World!"
  ~~~
 
  When I put the above code in my .cabal file and do `cabal haddock
  --executables` I get the following error:
 
  haddock: failed to parse haddock prologue from file:
  dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt
 
  In general I can provoke the parse error from haddock whenever I have
  something inside curly braces.
 
  So the error seems to stem from haddock. I've tried to track down what
  happens inside haddock but I've run out of steam. I'd like to know if there
  is anything I can do to be able to write something like the above as
  part of the description in my .cabal file.
 
  Thanks,
 
  Josef
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
Hi Ryan,

As far as I can tell I'm following the Haddock formatting just fine. I'm
using bird tracks for my code block and according to the Haddock manual
those code blocks are interpreted literally without any additional markup.
 To me that suggests that I should be able to write just about anything in
these code blocks. But that's evidently not so.

The documentation for the lens package is indeed impressive. But it doesn't
help me with my particular conundrum of the curly braces.

Thanks,

Josef

On Mon, Feb 4, 2013 at 2:01 PM, Ryan Yates fryguy...@gmail.com wrote:

 Hi Josef,

 You should be fine if you follow Haddock formatting.  For example:

 http://hackage.haskell.org/package/lens

 Is from the cabal file:

 http://hackage.haskell.org/packages/archive/lens/3.8.5/lens.cabal


 Ryan


 On Mon, Feb 4, 2013 at 7:30 AM, Josef Svenningsson 
 josef.svennings...@gmail.com wrote:

 Hi,

 I'm putting together a cabal package and I'd like to have some code
 examples in my description file. In particular I would like to have a code
 block containing markdown containing a code block of Haskell, like this:

  ~~~{ .haskell }
  module Main where
 
   main = putStrLn "Hello World!"
  ~~~

 When I put the above code in my .cabal file and do `cabal haddock
 --executables` I get the following error:

 haddock: failed to parse haddock prologue from file:
 dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt

 In general I can provoke the parse error from haddock whenever I have
 something inside curly braces.

 So the error seems to stem from haddock. I've tried to track down what
  happens inside haddock but I've run out of steam. I'd like to know if there is
 anything I can do to be able to write something like the above as part of
 the description in my .cabal file.

 Thanks,

 Josef

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
On Mon, Feb 4, 2013 at 5:10 PM, Ryan Yates fryguy...@gmail.com wrote:

 Hi Josef,

 Sorry, I misunderstood the intent.  From what I can tell this is a
 Cabal deficiency.  The text from the description field first passes through
 the `tokeniseLine` function from here:
 https://github.com/haskell/cabal/blob/cabal-1.16/Cabal/Distribution/ParseUtils.hs#L428
 which seems to have no provision for escaping.

 Indeed. Bummer. And I notice now that there's already a bug filed in the
tracker about this:
https://github.com/haskell/cabal/issues/885

I guess I'll have to do without the example in my documentation for now.

Thank you very much!

Josef


 Ryan


 On Mon, Feb 4, 2013 at 8:37 AM, Josef Svenningsson 
 josef.svennings...@gmail.com wrote:

 Hi Ryan,

 As far as I can tell I'm following the Haddock formatting just fine. I'm
 using bird tracks for my code block and according to the Haddock manual
 those code blocks are interpreted literally without any additional markup.
  To me that suggests that I should be able to write just about anything in
 these code blocks. But that's evidently not so.

 The documentation for the lens package is indeed impressive. But it
 doesn't help me with my particular conundrum of the curly braces.

 Thanks,

 Josef

 On Mon, Feb 4, 2013 at 2:01 PM, Ryan Yates fryguy...@gmail.com wrote:

 Hi Josef,

 You should be fine if you follow Haddock formatting.  For example:

 http://hackage.haskell.org/package/lens

 Is from the cabal file:

 http://hackage.haskell.org/packages/archive/lens/3.8.5/lens.cabal


 Ryan


 On Mon, Feb 4, 2013 at 7:30 AM, Josef Svenningsson 
 josef.svennings...@gmail.com wrote:

 Hi,

 I'm putting together a cabal package and I'd like to have some code
 examples in my description file. In particular I would like to have a code
 block containing markdown containing a code block of Haskell, like this:

  ~~~{ .haskell }
  module Main where
 
   main = putStrLn "Hello World!"
  ~~~

 When I put the above code in my .cabal file and do `cabal haddock
 --executables` I get the following error:

 haddock: failed to parse haddock prologue from file:
 dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt

 In general I can provoke the parse error from haddock whenever I have
 something inside curly braces.

 So the error seems to stem from haddock. I've tried to track down what
  happens inside haddock but I've run out of steam. I'd like to know if there is
 anything I can do to be able to write something like the above as part of
 the description in my .cabal file.

 Thanks,

 Josef

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] smt solver bindings

2011-12-15 Thread Josef Svenningsson
On Thu, Dec 15, 2011 at 7:04 PM, Dimitrios Vytiniotis 
dimit...@microsoft.com wrote:


 I've a quick question:

 Are there Haskell wrappers for the Z3 C API around?

 I believe sbv recently got support for Z3 but I don't know if it uses the
C API. Nor have I tried the Z3 backend myself; I've only played with the Yices
backend. If you contact Levent Erkök, the author of sbv, he should be able
to give you more information.

 https://github.com/LeventErkok/sbv
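
For what it's worth, here is roughly what driving sbv looks like (a sketch from memory; check the sbv docs for the exact way to select the Z3 backend, which I haven't tried myself):

  import Data.SBV

  -- Prove a trivial theorem with whatever prover sbv is configured to use.
  main :: IO ()
  main = print =<< prove (\x -> x + 1 .> (x :: SInteger))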

Thanks,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Mysterious complaint about .hi files

2011-06-07 Thread Josef Svenningsson
Hi cafe!

I'm hitting a very strange problem when using haskell-src-exts and
haskell-src-exts-qq. Consider the following module:

\begin{code}
{-# Language QuasiQuotes #-}
module TestBug where

import Language.Haskell.Exts
import Language.Haskell.Exts.QQ

unit = TyTuple Boxed []

ty = [dec| quux :: (a,b) |]
\end{code}

This module doesn't load for me using ghc 7.0.3. I've pasted the full error
message at the end of this email but the error message begins with the
following lines:

TestBug.hs:11:11:
Can't find interface-file declaration for variable
Language.Haskell.Exts.Syntax.Boxed
  Probable cause: bug in .hi-boot file, or inconsistent .hi file
  Use -ddump-if-trace to get an idea of which file caused the error

Using -ddump-if-trace didn't help me much.

The funny thing is that if I comment out the last line (the definition of
'ty') then the module loads just fine even though it uses the Boxed type in
the definition of 'unit'. So the problem only manifests itself when I use
tuples from haskell-src-exts-qq. Everything else that I've used from
haskell-src-exts-qq works fine, it's just when I try to use tuples that
things go haywire.

I've tried to remove the packages and reinstall them but it didn't help.

Any clues?

Josef

TestBug.hs:11:11:
Can't find interface-file declaration for variable
Language.Haskell.Exts.Syntax.Boxed
  Probable cause: bug in .hi-boot file, or inconsistent .hi file
  Use -ddump-if-trace to get an idea of which file caused the error
In the first argument of `Language.Haskell.Exts.Syntax.TyTuple', namely
  `Language.Haskell.Exts.Syntax.Boxed'
In the third argument of `Language.Haskell.Exts.Syntax.TypeSig', namely
  `Language.Haskell.Exts.Syntax.TyTuple
 Language.Haskell.Exts.Syntax.Boxed
 ((:)
(Language.Haskell.Exts.Syntax.TyVar
   (Language.Haskell.Exts.Syntax.Ident ((:) 'a' [])))
((:)
   (Language.Haskell.Exts.Syntax.TyVar
  (Language.Haskell.Exts.Syntax.Ident ((:) 'b' [])))
   []))'
In the expression:
  Language.Haskell.Exts.Syntax.TypeSig
(SrcLoc
   ((:)
  ''
  ((:)
 'u'
 ((:)
'n'
((:)
   'k'
   ((:)
  'n'
  ((:)
 'o'
 ((:)
'w' ((:) 'n' ((:) '' ((:) '.' ((:) 'h' ((:)
's' []
   1
   2)
((:)
   (Language.Haskell.Exts.Syntax.Ident
  ((:) 'q' ((:) 'u' ((:) 'u' ((:) 'x' [])
   [])
(Language.Haskell.Exts.Syntax.TyTuple
   Language.Haskell.Exts.Syntax.Boxed
   ((:)
  (Language.Haskell.Exts.Syntax.TyVar
 (Language.Haskell.Exts.Syntax.Ident ((:) 'a' [])))
  ((:)
 (Language.Haskell.Exts.Syntax.TyVar
(Language.Haskell.Exts.Syntax.Ident ((:) 'b' [])))
 [])))
Failed, modules loaded: none.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wanted: composoable parsers from haskell-src-exts

2011-03-18 Thread Josef Svenningsson
On Thu, Mar 17, 2011 at 10:58 PM, Niklas Broberg
niklas.brob...@gmail.com wrote:

I already export a partial parser for top-of-file pragmas,
 
  I see. What I don't see is how such a parser would return the rest of
 input.

 Hmm. I see. And I see that you are correct in not seeing it, since it
 appears it cannot be done with Happy, which I believed. It would then be
 down to the parser/lexer to pass on unconsumed input, which seems a daunting
 and disruptive task, seeing how the lexer typically would tokenize some
 input in advance before the parser discovers that it cannot consume what has
 been lexed... Hmm.


I think this is actually doable, although not necessarily easy, using a
technique due to Oleg. He has used delimited continuations to take ordinary
parsers and make them incremental. Dan Doel has experimented with applying
the technique to Parsec parsers with some success. I think choosing the
right parser monad in Happy can make this work.

Reference to Oleg's technique:
http://okmij.org/ftp/continuations/differentiating-parsers.html
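
To give a rough flavour of what "incremental" means here (this is neither Oleg's delimited-continuation construction nor anything Happy-specific, just the resumption-style interface such a parser typically ends up exposing):

\begin{code}
data Result a
  = Done a String                 -- finished, with the unconsumed rest
  | Partial (String -> Result a)  -- out of input: ask for more

-- A toy incremental "parser": read characters up to a newline.
line :: String -> Result String
line = go []
  where
    go acc []          = Partial (go acc)       -- suspend until fed more input
    go acc ('\n':rest) = Done (reverse acc) rest
    go acc (c:rest)    = go (c:acc) rest
\end{code}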

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safer common subexpression elimination

2010-11-26 Thread Josef Svenningsson
On Thu, Nov 25, 2010 at 11:32 AM, Joachim Breitner m...@joachim-breitner.de
 wrote:

 Hi,

 although semantically it could, ghc does not do common subexpression
 elimination (CSE), at least not in every case. The canonical example
 where it would be bad is the function (with div to avoid numerical
 casting):

  avg :: [Int] -> Int
  avg n = (sum [0..n] `div` length [0..n])

 The reason why [0..n] is not shared is because then it would be kept in
 memory completely after the evaluation of sum, because length will still
 need it, possibly overflowing the memory. In the non-shared form,
 though, the garbage collector can clean up directly behind sum and
 length, and the program runs in constant space.

 Note that the shared expression would be very large compared to the
 thunks.

 Now consider the program:

  avg' :: [Int] -> (Int, Int)
  avg' n = (sum [0..n] `div` length [0..n], length [0..n])

 I’d say that CSE to
  avg n = (sum [0..n] `div` l, l)
where l = length [0..n]
 is safe, because the shared expression (an Int) is smaller than the
 thunks and the compiler can tell.

 So I wonder:
  * Is sharing values of type Int (and Bool and similar small values)
 always safe?
  * If so: does GHC already do that?
  * Would it be technically possible?
  * Is there an established theory that can tell, for a sharing
 candidate, whether it is safe to share it?

 I'm just going to answer your last question.

Jörgen Gustavsson and David Sands developed the theory of space improvement
for call-by-need. That can help you answer the other questions you have. But
that being said, it's fairly complicated stuff, and it's not an easy theory
to use. I think that's inherent in the problem, though: the
space behavior of lazy programs is just weird. If you're curious to read
about space improvement see papers [GS01] and [GS99] on the following page:
http://www.cse.chalmers.se/~dave/davewww.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compiling a DSL on the shoulders of GHC

2010-10-18 Thread Josef Svenningsson
Fiddling with GHC internals sounds like overkill for this project.

Are you really sure you need a timeout to run the Haskell metaprogram? There
are many implementations of EDSLs which take the approach that you want to
take by using Haskell to create a syntax tree and the offshore it to some
backend compiler. None of them uses a timeout.

But in case you really insist on a timeout I would recommend using a wrapper
function on the toplevel of your metaprogram which implements the timeout.
That way you don't have to risk your sanity by having to dig around in GHC.
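
Something like this, say (System.Timeout ships with GHC; the AST type and the five-second budget here are just placeholders for whatever your metaprogram actually produces):

\begin{code}
import System.Timeout (timeout)

-- Placeholder syntax tree and metaprogram; only the timeout wrapper matters.
data AST = Leaf Int | Node AST AST deriving Show

buildAST :: AST
buildAST = Node (Leaf 1) (Leaf 2)

main :: IO ()
main = do
  -- Force the tree under a 5-second wall-clock budget (argument in microseconds).
  r <- timeout (5 * 1000 * 1000) (return $! length (show buildAST))
  case r of
    Nothing -> putStrLn "metaprogram timed out"
    Just _  -> print buildAST    -- hand the finished tree to the backend here
\end{code}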

Josef

On Sun, Oct 17, 2010 at 9:53 PM, Patai Gergely patai_gerg...@fastmail.fm wrote:

  Not sure how this fits into what I thought you were saying.  Are you
  trying to use Haskell to build an AST, use GHC to optimize it, and
  then spit it out and compile it with, say, a OCaml program that you
  have already written?
 Yes, that would be the basic idea:

 1. Compile the Haskell metaprogram.
 2. Evaluate main, possibly with a timeout, in a way that keeps all its
 structure including lambdas accessible (e.g. Core).
 3. Compile the resulting program with other tools.

  What is this different tool and how does it fit in to your pipeline?
 This tool(set) is a specialised compiler for some low-level target
 platform (FPGA, DSP, GPU - again, no clear decision yet), and it is the
 second half of the pipeline after the GHC phases.

 Gergely

 --
 http://www.fastmail.fm - The professional email service

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Higher-order algorithms

2010-08-24 Thread Josef Svenningsson
On Mon, Aug 23, 2010 at 6:10 PM, Max Rabkin max.rab...@gmail.com wrote:

 (Accidentally sent off-list, resending)

 On Mon, Aug 23, 2010 at 15:03, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  * Difference lists

  I mean that not only higher-order facilities are used, but the essence
  of the algorithm is some non-trivial higher-order manipulation.

 If I'm not mistaken, we can defunctionalize difference lists like this:

 data DList a = Chunk [a] | Concat (DList a) (DList a)

 fromList = Chunk
 (<>) = Concat
 singleton = Chunk . (:[])
 empty = Chunk []

 toList dl = dl `fold` []
  where
infixr `fold`
    fold :: DList a -> [a] -> [a]
fold (Concat l r) ys = l `fold` r `fold` ys
fold (Chunk xs) ys = xs ++ ys

 (This implementation has only been lightly tested)

 And of course, we knew this was possible, since we can compile DLists
 to first-order machines.

 I agree that the functional, higher-order presentation is clear and
 elegant. But is it essential?

 It's true that any higher-order program can be defunctionalized (or
closure-converted) to a first order program. But defunctionalization is a
whole program transformation and in general we might lose compositionality
when applying it to a library. In your case above with difference lists
there is no change in the interface since it is first order. But if you try
to defunctionalize a monad then you will have to defunctionalize the second
argument to the bind function and all of a sudden you cannot use the bind
function as freely as before.


 Also, I'm curious about how this performs relative to the functional
 version.

 In my small experiments with defunctionalization there is not much
difference between a higher order program and its defunctionalized version.
I used GHC in those experiments.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] feasability of implementing an awk interpreter.

2010-08-20 Thread Josef Svenningsson
On Fri, Aug 20, 2010 at 6:05 AM, Jason Dagit da...@codersbase.com wrote:



 On Thu, Aug 19, 2010 at 8:05 PM, Michael Litchard mich...@schmong.org wrote:

 I'd like the community to give me feedback on the difficulty level of
 implementing an awk interpreter. What language features would be
 required? Specifically I'm hoping that TH is not necessary because I'm
 nowhere near that skill level.


 Implementing an awk interpreter in Haskell can be a fun project. I have a
half finished implementation lying around on the hard drive. It's perfectly
possible to implement it without using any super fancy language features.
But as other people have pointed out, monads are helpful for dealing with a
lot of the plumbing in the interpreter.

An outline of a possible approach would be appreciated. I am using
 http://www.math.utah.edu/docs/info/gawk_toc.html
 as a guide to the language description.


 You might also focus on the 'core' of awk.  Think about, what is the
 minimal language and start from there.  Grow your implementation adding
 features bit by bit.  It's also a good opportunity to do testing.  You have
 a reference implementation and so you can write lots of tests for each
 feature as you add them.

 When I wrote my awk interpreter I decided to go for the whole language from
start. I had reasons for doing this as there were certain aspects of this
that I wanted to capture but it is not they way I would recommend going
about it. I definitely second Jason's advice at trying to capture the core
functionality first.

Have fun,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] derivable type classes

2010-03-23 Thread Josef Svenningsson
On Tue, Mar 23, 2010 at 11:52 AM, Ozgur Akgun ozgurak...@gmail.com wrote:
 Can a user define a derivable type class of her own? If yes, how?

GHC has a feature which lets you define classes such that making an
instance of them is as easy as deriving. It's called Generic classes.
See GHC's documentation for the details:
http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-classes.html

Hth,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] <**> for nested applicative functors?

2009-10-12 Thread Josef Svenningsson
On Mon, Oct 12, 2009 at 6:22 PM, Kim-Ee Yeoh a.biurvo...@asuhan.com wrote:

 Does anyone know if it's possible to write the following:

 (<**>) :: (Applicative m, Applicative n) =>
 m (n (a -> b)) -> m (n a) -> m (n b)

 Clearly, if m and n were monads, it would be trivial.

 Rereading the original paper, I didn't see much discussion
 about such nested app. functors.

 Any help appreciated.

How about

m ** n = pure (*) * m * n

Hth,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] types and braces

2009-04-17 Thread Josef Svenningsson
Conor,

I'd like to point out a few things that may help you on the way.

On Wed, Apr 15, 2009 at 8:58 PM, Conor McBride
co...@strictlypositive.org wrote:
 I don't immediately see what the clash in that context would be - I
 *think* what you propose should be doable. I'd be interested to know
 what you come up with, or I might have a look at it myself when I find
 a few minutes to spare.

 I've found that I can add a production

 atype :: { Type }
  ...
  | '{' trueexp '}'

 if I remove the productions for record declarations

 constr1 :: { ConDecl }
      | con '{' '}'                   { RecDecl $1 [] }
      | con '{' fielddecls '}'        { RecDecl $1 (reverse $3) }

 which suggests that it is indeed the syntax

  data Moo = Foo {goo :: Boo Hoo}

 which is in apparent conflict with my proposed extension.
 The current parser uses the type parser btype to parse the
 initial segment of constructor declarations, so my change
 causes trouble.

 Further trouble is in store from infix constructors

  data Noo = Foo {True} :*: Foo {False}

 should make sense, but you have to look quite far to
 distinguish that from a record.

 So I don't see that my proposed extension introduces a
 genuine ambiguity, but it does make the parser a bit
 knottier.

Remember that even though your parser is unambiguous, that doesn't mean
that happy will be able to handle it. Happy deals with LALR grammars
and you have to confine yourself to that restriction in order to make
happy happy. Also, your example above suggests that your grammar might
require an infinite lookahead, something which happy doesn't deal
with.

Having said all this, there is a magic flag which you can give to
happy which will make all these headaches go away. The incantation is
--glr, which makes happy produce generalized LR parsers that can deal
even with ambiguous grammars. I've never used this myself so I can't
give you any further advice than to point you in the general
direction. The happy manual is your friend.

Happy happy hacking.

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: pqueue-mtl, stateful-mtl

2009-02-16 Thread Josef Svenningsson
On Mon, Feb 16, 2009 at 2:30 AM, wren ng thornton w...@freegeek.org wrote:
 Louis Wasserman wrote:

 I follow.  The primary issue, I'm sort of wildly inferring, is that use of
 STT -- despite being pretty much a State monad on the inside -- allows
 access to things like mutable references?

 That's exactly the problem. The essential reason for ST's existence are
 STRefs which allow mutability.

I'd like to point out one other thing that ST provides, which is often
forgotten. It provides *polymorphic* references. That is, we can
create new references of any type.

So ST is a really magical beast. Not only does it provide mutation, it
also provides mutable references. And these are two different
features. Now, everyone agrees that mutation is not something that you
can implement in a functional language, so ST cannot be implemented in
Haskell for that reason. It has to be given as a primitive. But what
about polymorphic references? Can they be implemented in Haskell? The
Claessen conjecture (after Koen Claessen) is that they cannot be
implemented in Haskell. See the following email for more details:
http://www.haskell.org/pipermail/haskell/2001-September/007922.html
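
To make the polymorphic-references point concrete (just an illustration of the feature, nothing deep):

\begin{code}
import Control.Monad.ST
import Data.STRef

-- References at two different types, allocated inside the same ST
-- computation; nothing about STRef fixes the element type up front.
example :: (Int, Bool)
example = runST $ do
  r1 <- newSTRef (0 :: Int)
  r2 <- newSTRef False
  writeSTRef r1 42
  writeSTRef r2 True
  a <- readSTRef r1
  b <- readSTRef r2
  return (a, b)

main :: IO ()
main = print example   -- (42,True)
\end{code}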

One could try and separate mutation and polymorphic references and
give them as two different primitives and implement ST on top of that.
But I haven't seen anyone actually trying that (or needing it for that
matter).

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell project proposals reddit

2008-12-10 Thread Josef Svenningsson
On Wed, Dec 10, 2008 at 5:34 AM, Don Stewart [EMAIL PROTECTED] wrote:
 I'd like to echo Jason's remarks earlier.

http://www.reddit.com/r/haskell_proposals/

 We've tried for a couple of years now to efficiently track 'wanted
 libraries' for the community, but never with much success.

 In particular, two approaches have been tried:

* a wiki page
* the 200{6,7,8} summer of code trac

 neither proved workable long term, likely because no one knew about
 them, they're harder to contribute to and other factors.

I think this is a wonderful initiative, but I can't shake the feeling
that reddit is the wrong forum for this. Since reddit is primarily a
news site it penalises old submissions and eventually moves them out
of the front page. I can't see how that behavior is good for our
purposes here. A project proposal that has a thousand upvotes
shouldn't be moved from the list just because the proposal itself is
old.

If we want something that works in the long run we want something like
reddit but without the aging of old submissions. I don't know of any
such thing but there's bound to be one, this being the internet after
all.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Philip Wadler video on Howard-Curry Correspondence ???

2008-11-27 Thread Josef Svenningsson
2008/11/27 Galchin, Vasili [EMAIL PROTECTED]:
 Hello,

 I am reading re-reading Prof. Wadler paper

 Proofs are Programs: 19th Century Logic and 21st Century Computing

 but also want to re-read watch his video on same subject.

Is it this talk you're after?
http://video.google.com/videoplay?docid=-4167170843018186532&ei=sI0uSZT7Faf22gKd9NTqDQ&q=wadler+philip

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers

2008-11-26 Thread Josef Svenningsson
On Wed, Nov 26, 2008 at 11:14 PM, David Menendez [EMAIL PROTECTED] wrote:
 How old is nhc? I've always thought of it as one of the big three,
 but I don't really know how far back it goes compared to ghc.

The following page suggests that it was released mid 1994 but there
could of course have been earlier releases.
http://www.cs.chalmers.se/pub/haskell/nhc/old/

Perhaps Malcolm Wallace knows more.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a really juvenile question .. hehehehe ;^)

2008-10-06 Thread Josef Svenningsson
On Mon, Oct 6, 2008 at 2:58 PM, Cale Gibbard [EMAIL PROTECTED] wrote:
 2008/10/6 Don Stewart [EMAIL PROTECTED]:
 dagit:
data and newtype vary in one more subtle way, and that's how/when they
evaluate to bottom.  Most of the time they behave identically, but in the
right cases they act sightly differently.  newtype is usually regarded as
more efficient than data.  This is because the compiler can choose to
optimize away the newtype so that it only exists at type check time.  I
think this is also possible with data in some, but not all, uses.

 The compiler *must* optimise away the use. They're sort of 'virtual'
 data, guaranteed to have no runtime cost.

 I'm not sure that I'd want to be that emphatic about what an
 implementation *must* do regarding something so operational.

 [..]

 We can say however that newtypes have no additional runtime cost in
 GHC regardless of the optimisation level you pick.

Not even that is true in general. One can end up doing
unnecessary work just for the sake of converting types.

Suppose you have a newtype Price = Price Int and you're given [Int]
and want to have [Price]. This is simple to do, just 'map Price'. But
since Price and Int are represented the same way this ought to be just
the identity function. But it is in general very difficult for a
compiler to figure out that this traversal of the list in fact is just
the identity function. Simple type conversions like these can
unfortunately force you to do some work even though the representation
is identical.
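
In code, the situation looks like this (Price is of course just an illustration):

  newtype Price = Price Int

  -- Representationally the identity, but as written this still walks the
  -- whole list, re-allocating every cons cell:
  toPrices :: [Int] -> [Price]
  toPrices = map Price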

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Red-Blue Stack

2008-09-26 Thread Josef Svenningsson
On Fri, Sep 26, 2008 at 7:18 PM, Stephan Friedrichs
[EMAIL PROTECTED] wrote:
 apfelmus wrote:
 [..]

 Persistent data structures are harder to come up with than ephemeral
 ones, [...]

 Yes, in some cases it's quite hard to find a persistent solution for a
 data structure that is rather trivial compared to its ephemeral
 counterpart. My question is: Is there a case, where finding a persistent
 solution that performs equally well is *impossible* rather than just
 harder? I mean might there be a case where (forced) persistence (as we
 have in pure Haskell) is a definite disadvantage in terms of big-O
 notation? Do some problems even move from P to NP in a persistent setting?

The only result I'm aware of is that of Nicholas Pippenger where he
shows that there are algorithms which are slower by a factor of log n
if one is not allowed to use mutation:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.670

Interestingly enough, this particular result does not carry over to
Haskell. The particular algorithm that he uses can actually be
implemented optimally using lazy evaluation, as show in the following
paper:
http://progtools.comlab.ox.ac.uk/members/oege/publications/jfp97

So a pure strict language is less efficient than a strict language
with mutation and a pure lazy language. Although intuitively a pure
lazy language should also be less efficient than a strict language
with mutation I'm not aware of any such results.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Call Graph Tool?

2008-06-27 Thread Josef Svenningsson
On Fri, Jun 27, 2008 at 12:39 PM, Claus Reinke [EMAIL PROTECTED] wrote:
 Assuming I get it included, is there any features in particular you'd
 want to
 see in there?  Note that if I do have it produce visualisations, they'll
 be
 static images as part of an analysis report rather than being
 interactive.

 I'd like the ability to show individual module dependencies, and then
 to collapse modules in one package to just the package, so I could
 zoom out and see how the packages relate to each other.  By
 package here I mean the A in A.B, A.C, etc.

 It seems like it would be fairly simple to use Language.Haskell.Parse
 to turn a set of modules into a graph, and then something to massage
 that and give it to dot.

 If you wanted to go down that route, try using 'ghc --make -v2'
 and translate that dependency graph to dot.

Yep, this is a pretty easy route and there is already a tool for doing
the translation: ocamldot. Don't be fooled by the name, it works on
Makefile's dependency information and in particular works well with
the output from ghc. Back in the days (like six years ago) I used it
on ghc itself. I also extended it so that different directories were
grouped together inside a box to get a better feel for the intended
structure of the program. ocamldot can be found here:
http://www.research.att.com/~trevor/ocamldot/

I should add that I didn't find the information the least bit helpful
so my general recommendation is to try to find some other method to
help understanding code.

All the best,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ghc on Ubuntu Linux?

2008-05-18 Thread Josef Svenningsson
On Sun, May 18, 2008 at 3:24 AM, Galchin, Vasili [EMAIL PROTECTED] wrote:
 Hi Josef,

  What generates dist/setup-config? When I run runhaskell Setup.hs
 configure, nothing including dist/setup.config gets generated. ??

Ok, that sounds more alarming. Unfortunately I have no idea what could
be the cause of this. Hopefully someone with a bit more cabal
knowledge can help out.

Good luck,

Josef


 On Sat, May 17, 2008 at 11:02 AM, Josef Svenningsson
 [EMAIL PROTECTED] wrote:

 On Sat, May 17, 2008 at 1:00 PM, Galchin, Vasili [EMAIL PROTECTED]
 wrote:
  Josef,
 
  E.g.
  [EMAIL PROTECTED]:~/FTP/Haskell/unix-2.2.0.0$ runhaskell Setup.hs
  configure
  Configuring unix-2.3.0.0...
 
  Normally copious output ... ???
 
 That's what it should look like! It just means you have a recent
 version of cabal which doesn't spit out tons of information when
 configuring. Happily all is well.

 /Josef


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ghc on Ubuntu Linux?

2008-05-17 Thread Josef Svenningsson
Vasili,

I have pretty much exactly the same set up as you seem to have. I
haven't had a single problem with running configure using cabal. In
what sense does it stop working?

Cheers,

Josef

2008/5/17 Galchin, Vasili [EMAIL PROTECTED]:
 PS I have always installed ghc first via the Ubuntu package installer
 followed by a build from ghc 6.8.2 source!

 Vasili


 On Sat, May 17, 2008 at 12:01 AM, Galchin, Vasili [EMAIL PROTECTED]
 wrote:

 ghc-pkg list gives me 


 [EMAIL PROTECTED]:~$ ghc-pkg list
 /usr/local/lib/ghc-6.8.2/package.conf:
 Cabal-1.2.3.0, GLUT-2.1.1.1, HUnit-1.2.0.0, OpenGL-2.2.1.1,
 QuickCheck-1.1.0.0, array-0.1.0.0, base-3.0.1.0,
 bytestring-0.9.0.1, cgi-3001.1.5.1, containers-0.1.0.1,
 directory-1.0.0.0, fgl-5.4.1.1, filepath-1.1.0.0, (ghc-6.8.2),
 haskell-src-1.0.1.1, haskell98-1.0.1.0, hpc-0.5.0.0, html-1.0.1.1,
 mtl-1.1.0.0, network-2.1.0.0, old-locale-1.0.0.0, old-time-1.0.0.0,
 packedstring-0.1.0.0, parallel-1.0.0.0, parsec-2.1.0.0,
 pretty-1.0.0.0, process-1.0.0.0, random-1.0.0.0, readline-1.0.1.0,
 regex-base-0.72.0.1, regex-compat-0.71.0.1, regex-posix-0.72.0.2,
 rts-1.0, stm-2.1.1.0, template-haskell-2.2.0.0, time-1.1.2.0,
 unix-2.3.0.0, xhtml-3000.0.2.1


 I am using version ghc-6.8.2.


 V

 On Fri, May 16, 2008 at 11:51 PM, Galchin, Vasili [EMAIL PROTECTED]
 wrote:

 Hello,

  I am running ghc and tools on Ubuntu. I seem to be running into very
 strange problems between ghc-pkg and the Ubuntu package manager. Suddenly
 runhaskell Setup.hs configure just stops working ... Are any other
 ghc/Ubuntu users running into problems? If so, please tell me what problems.
 This is killing my Haskell work!

 Regards, Vasili



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] STM example code

2008-03-17 Thread Josef Svenningsson
2008/3/9 Galchin Vasili [EMAIL PROTECTED]:
   I am playing around with the STM API. I would like to see examples of
 STM other than the Santa.hs as I am having problems with STM vs IO.

Here's my implementation of the Dining Philosophers in STM:
http://computationalthoughts.blogspot.com/2008/03/some-examples-of-software-transactional.html

I hope you'll find it helpful.
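
If you just want a quick taste before reading that post, here is a separate minimal example (an account transfer; modifyTVar' needs a reasonably recent stm package, otherwise use readTVar and writeTVar):

\begin{code}
import Control.Concurrent.STM

-- Move money between two balances; 'check' retries the whole
-- transaction until the source has enough funds.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  pair <- atomically $ do
    x <- readTVar a
    y <- readTVar b
    return (x, y)
  print pair   -- (70,30)
\end{code}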

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lazy evaluation

2008-02-06 Thread Josef Svenningsson
On Feb 6, 2008 3:06 PM, Miguel Mitrofanov [EMAIL PROTECTED] wrote:

 On 6 Feb 2008, at 16:32, Peter Padawitz wrote:

  Can anybody give me a simple explanation why the second definition
  of a palindrome checker does not terminate, although the first one
  does?
 
  pal :: Eq a => [a] -> Bool
  pal s = b where (b,r) = eqrev s r []
 
  eqrev :: Eq a => [a] -> [a] -> [a] -> (Bool,[a])
  eqrev (x:s1) ~(y:s2) acc = (x==y && b, r) where (b,r) = eqrev s1 s2
  (x:acc)
  eqrev _ _ acc  = (True,acc)

 I.    eqrev "" (_|_) acc = (True, acc)
 II.a. eqrev "1" (_|_) "" = ('1' == (_|_) && b, r) where (b,r) = eqrev
   "" (_|_) "1"
    By (I), (b,r) = (True, "1"), so eqrev "1" (_|_) "" = ((_|_), "1")
 II.b. eqrev "1" "1" "" = ('1' == '1' && b, r) where (b,r) = eqrev "" ""
   "1"
    (b,r) = (True, "1"), so eqrev "1" "1" "" = (True, "1")

 Therefore, the least fixed point of \r -> eqrev "1" r "" is "1" and
 the answer is True.

  pal :: Eq a => [a] -> Bool
  pal s = b where (b,r) = eqrev' s r
 
  eqrev' :: Eq a => [a] -> [a] -> (Bool,[a])
  eqrev' (x:s1) ~(y:s2) = (x==y && b, r++[y]) where (b,r) = eqrev' s1 s2
  eqrev' _ _   = (True,[])

 I.    eqrev' "" (_|_) = (True,[])
 II.a. eqrev' "1" (_|_) = ('1' == (_|_) && b, r ++ [(_|_)]) where (b,r)
    = eqrev' "" (_|_)
    By (I), (b,r) = (True,[]), so eqrev' "1" (_|_) = ((_|_), [(_|_)])
 II.b. eqrev' "1" [(_|_)] = ('1' == (_|_) && b, r ++ [(_|_)]) where
    (b,r) = eqrev' "" []
    (b,r) = (True,[]), so eqrev' "1" [(_|_)] = ((_|_), [(_|_)])
 Therefore, the least fixed point of \r -> eqrev' "1" r is [(_|_)] and
 the answer is (_|_). No wonder it hangs.

This proof also shows us where the problem lies and how to fix it. It
turns out to be really easy: change 'r++[y]' to 'r++[x]' and the
program works.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Missing join and split

2008-01-01 Thread Josef Svenningsson
On Dec 28, 2007 11:40 PM, Mitar [EMAIL PROTECTED] wrote:
 Would not it be interesting and useful (but not really efficient) to
 have patterns something like:

 foo :: Eq a => a -> ...
 foo (_{4}'b') = ...

 which would match a list with four elements ending with an element 'b'. Or:

 foo (_+';'_+';'_) = ...

 which would match a list with two embedded ';' elements. (The last _
 matches the remainder of the list.)

I suggest you take at look at HaRP, Haskell Regular Patterns:
http://www.cs.chalmers.se/~d00nibro/harp/

It hasn't been updated for a while but it should still be useable.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graph theory analysis of Haskell code

2007-12-06 Thread Josef Svenningsson
This sounds like a fun project and it is certainly feasible to do. I
thought I'd give you some pointers to fun stuff that people have been
doing in the past.

Thomas Reps has been doing program analysis since the dawn of time,
but one paper that seems particularly related to what you're trying to do is
"Identifying modules via concept analysis". You can find that and the
rest of his papers on his homepage:
http://pages.cs.wisc.edu/~reps/

There are many different characteristics of a program graph that can
be of interest. One that has recently given rise to some interesting
results is the notion of tree width. An example of its use is the
following paper: http://citeseer.ist.psu.edu/601409.html

Have fun,

Josef

On Dec 6, 2007 12:46 AM, Ivan Miljenovic [EMAIL PROTECTED] wrote:
 This isn't strictly Haskell related, but anyway.

 Next year I will be doing my honours in mathematics.  One possible
 topic for my thesis that I've thought of - and my supervisor is quite
 enthused about - is to use graph theory to analyse various textual
 sources, starting with source code but leaving the framework open
 enough to be able to extend it to other sources (e.g. email address
 books).

 How I envisage it happening is that a parser would be used to find all
 functions in the given code, treat these as nodes in the graph and
 then use directed edges to indicate which functions call other
 functions.  This resultant graph can then be analysed in various ways
 suitable to the context (e.g. find that a library module can be split
 into two since there are two completely separate trees present in the
 graph that don't interact at all, or if a function is only ever called
 by one other function then it can be subsumed into it).

 So, here is the question I ask of all of you: is this feasible?  Do
 you know if anything like this has ever been attempted before?  I know
 there are some other usages of graph theory related to source code
 (e.g. McCabe's complexity metric [1]), but I couldn't seem to find
 anything related to what I'm proposing.  I intend to code this up in
 Haskell (possibly using FGL: I know of it, but haven't really looked
 at it) and use Haskell as my primary target for analysis, so in a
 sense the resultant graph could be seen as a Haskell equivalent to
 UML.


 [1] http://en.wikipedia.org/wiki/Cyclomatic_complexity

 --
 Ivan Lazar Miljenovic
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] -O2 bug in GHC 6.8.1?

2007-11-20 Thread Josef Svenningsson
On Nov 20, 2007 4:32 PM, Ian Lynagh [EMAIL PROTECTED] wrote:

 Hi Brad,

 On Tue, Nov 20, 2007 at 09:50:02PM +1000, Brad Clow wrote:
 
  $ ./test
  23
  24

 I can't reproduce this. Can you please tell us what platform you are on
 (e.g. x86_64 Linux) and what gcc --version says?

 Also, where did your GHC come from, e.g. bindists from the download
 page, self-compiled?

I can also reproduce this on an Ubuntu machine with a self-compiled
ghc. gcc is 4.1.2.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Fusing foldr's

2007-10-30 Thread Josef Svenningsson
On 10/29/07, Bulat Ziganshin [EMAIL PROTECTED] wrote:
 you may also look at these data:

   1,225,416 bytes allocated in the heap
 152,984 bytes copied during GC (scavenged)
   8,448 bytes copied during GC (not scavenged)
  86,808 bytes maximum residency (1 sample(s))

   3 collections in generation 0 (  0.00s)
   1 collections in generation 1 (  0.00s)

 if your hypothesis is true, amount of data copied and number of
 generation-1 collection should be much less in the second case

Indeed.

avg4:
880,935,612 bytes allocated in the heap
319,064,404 bytes copied during GC (scavenged)
318,965,812 bytes copied during GC (not scavenged)
201,080,832 bytes maximum residency (9 sample(s))

   1681 collections in generation 0 (  1.67s)
  9 collections in generation 1 ( 13.62s)

avgP:
1,761,224,604 bytes allocated in the heap
714,644 bytes copied during GC (scavenged)
593,184 bytes copied during GC (not scavenged)
184,320 bytes maximum residency (2 sample(s))

   1908 collections in generation 0 (  0.04s)
  2 collections in generation 1 (  0.00s)

Allocation is cheap, copying expensive.

All the best,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Letting the darcs test fail, if QuickCheck tests fail

2007-10-30 Thread Josef Svenningsson
On 10/30/07, Henning Thielemann [EMAIL PROTECTED] wrote:

 When following the description on
  
 http://www.haskell.org/haskellwiki/How_to_write_a_Haskell_program#Add_some_automated_testing:_QuickCheck
   then darcs will run the QuickCheck tests on each 'darcs record', but the
 new patch is also accepted by darcs if one of the tests fail. What is the
 most simple way to let 'darcs record' fail, when a QuickCheck test fails?

The same thing bit me when I prepared a package recently. The way I
solved it was to call the function quickCheck' instead of test. It
returns a boolean indicating if the test was successful or not. If
it's false I call exitWithFailure. I posted some code to the wikibook:
http://en.wikibooks.org/wiki/Talk:Haskell/Packaging

Note that quickCheck' is only available in QuickCheck 2.

All the best,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-29 Thread Josef Svenningsson
On 10/28/07, Isaac Dupree [EMAIL PROTECTED] wrote:
 Josef Svenningsson wrote:
  Less bogus timing:
  avg4: 18.0s
  avgS: 2.2s
  avgP: 17.4s
 
  OK, so these figures make an even stronger case for my conclusion :-)
  Single traversal can be much faster than multiple traversals *when
  done right*.

 Did you use +RTS -N2 on your program (or whatever it is that makes GHC
 actually use multiple threads? or is that not necessary?)  Anyway I
 assume you wouldn't get better than 9.0s, which is still much worse than
 2.2s.

Oh, this is getting embarrassing.. Indeed, I forgot to use +RTS -N2.
But using those flags yielded a very interesting result:

avgP: 4.3s

Superlinear speedup!? As you say, I would have expected something
slightly larger than 9s. I think what happens here is that for avg4
the entire list has to be kept in memory between the two traversals
whereas for avgP the beginning of the list can be garbage collected
incrementally as the two threads traverse it. This could mean that the
list never moves to the second generation in the memory manager and
that can maybe account for the additional time savings. I'm not sure
how to verify that this is the case though.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-29 Thread Josef Svenningsson
On 10/29/07, Josef Svenningsson [EMAIL PROTECTED] wrote:
 But using those flags yielded a very interesting result:

 avgP: 4.3s

 Superlinear speedup!? As you say, I would have expected something
 slightly larger than 9s. I think what happens here is that for avg4
 the entire list has to be kept in memory between the two traversals
 whereas for avgP the beginning of the list can be garbage collected
 incrementally as the two threads traverse it. This could mean that the
 list never moves to the second generation in the memory manager and
 that can maybe account for the additional time savings. I'm not sure
 how to verify that this is the case though.

Bulat kindly suggested I use +RTS -s to monitor the garbage collectors
behavior. It seems my hypothesis was right.

avg4:
387 Mb total memory in use
MUT   time    2.43s  (  2.47s elapsed)
GC    time   15.32s  ( 16.05s elapsed)

avgP (+RTS -N2):
3 Mb total memory in use
MUT   time    4.61s  (  2.51s elapsed)
GC    time    0.04s  (  0.06s elapsed)

So it seems that the garbage collector takes an awful lot of time when
we allocate a big list like this. Hmmm. Strikes me as somewhat
suboptimal.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-27 Thread Josef Svenningsson
On 10/26/07, Dan Weston [EMAIL PROTECTED] wrote:
 Thanks for letting me know about the Data.Strict library on Hackage. I
 will definitely make use of that! BTW, you left out an import
 Data.List(foldl') in your example.

Yes, Data.Strict can be pretty handy for getting the right strictness.
Sorry about the missing import.

 My timing test is an order of magnitude worse than yours. Do you have an
 extra zero in your list endpoint?

   I fed these functions to ghc with the -O2 and -threaded flags and
   timed them using the list [1..1000]. The result (best times out of
   several runs):
   avg4: 284 ms
   avgS: 184 ms
   avgP: 248 ms

 Using ghc -threaded -O2 --make Avg.hs for ghc 6.6.1, I ran your tests on
 [1..1000] and got the user times:

 avg4: 12.75 s
 avgS:  3.65 s
 avgP: 15.56 s

 The funny thing is that avg4/avgS = 3.5 for and only 1.5 for you. I
 understand that with only 1 processor my avgP time may be twice yours,
 but not the avgS or avg4.

Oooops.. My numbers are totally bogus. I had code that looked like the
following:
\begin{code}
main = do
time avg4 [1..1000]
time avg4 [1..1000]
time avg4 [1..1000]
time avgS [1..1000]
time avgS [1..1000]
time avgS [1..1000]
time avgP [1..1000]
time avgP [1..1000]
time avgP [1..1000]
\end{code}
Not very elegant I know but I thought it would do the job. Apparently
I was wrong. GHC does common subexpression elimination on all the
lists so they're all shared between the different calls. Of course,
the first function call would always take long time but I ignored it,
thinking it was some anomaly. Anyway, I was totally sure that GHC only
did cse on constructor expressions and not on arbitrary computations.
Guess I was wrong. A little searching revealed the following quote by
Simon PJ:

 GHC does a very simple form of CSE. If it sees
let x = e in e
 it replaces the inner 'e' by x.  But that's all at the moment.

Lesson learned.

Less bogus timing:
avg4: 18.0s
avgS: 2.2s
avgP: 17.4s

OK, so these figures make an even stronger case for my conclusion :-)
Single traversal can be much faster than multiple traversals *when
done right*.

 I have the following machine:

 Main memory size: 2026 Mbytes
 Num Processors: 1
 Processor Type: Intel(R) Xeon(TM) CPU 2.80GHz x32
 Clock Speed: 2790 MHZ

In case you're still interested my machine looks like this:

Memory: 2026 Mbytes
Processor: AMD Turion(tm) 64 X2 Mobile Technology TL-56
Clock Speed: 1800MHz

All the best,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-26 Thread Josef Svenningsson
Sorry for reacting so late on this mail. I'm digging through some old mails...

On 10/12/07, Dan Weston [EMAIL PROTECTED] wrote:
 Always check optimizations to make sure they are not pessimizations!

 Actually, traversing the list twice is very cheap compared to space
 leakage, and accumulating pairs requires tuple boxing and unboxing which
 I don't know how to get GHC not to do.

I agree whole-heartedly that replacing multiple traversals with a
single traversal should be done with care as it more often than not
results in a pessimization. Indeed you showed just that with your
examples! But I'd thought it'd be interesting to see how it can
actually be an improvement if done carefully.

\begin{code}
import Control.Arrow
import qualified Data.Strict.Tuple as T
import Data.Strict.Tuple (Pair(..))
import Control.Parallel
import Data.List (foldl')    -- the import Dan noted was missing

-- two strict left folds (sum and count) combined with &&&, then divide
avg4 = uncurry (/) . (foldl' (+) 0 &&& foldl' (\x y -> x + 1) 0)
-- a single strict left fold accumulating a strict pair
avgS = T.uncurry (/) . foldl' (\p n -> ((+n) *!* (+1)) p) (0 :!: 0)
-- the two folds again, but sparked in parallel
avgP = uncurry (/) . (foldl' (+) 0 ! foldl' (\x y -> x + 1) 0)

(*!*) f g (a :!: b) = f a :!: g b

(!) f g a = fa `par` (fa, ga)
  where fa = f a
        ga = g a
\end{code}

avg4 is the function which was best among Dan's benchmarks. avgS uses
strict tuples. I just threw in avgP for fun, it traverses the lists in
parallel. Note: I do have a dual core machine so it makes sense to try
avgP.

I fed these functions to ghc with the -O2 and -threaded flags and
timed them using the list [1..1000]. The result (best times out of
several runs):
avg4: 284 ms
avgS: 184 ms
avgP: 248 ms

It seems doing a single traversal can be faster if you write your
function carefully. Doing the traversal in parallel was beneficial but
not as good as the single traversal.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Binary constants in Haskell

2007-10-25 Thread Josef Svenningsson
On 10/24/07, Neil Mitchell [EMAIL PROTECTED] wrote:
 Hi

   Are there binary constants in Haskell, as
   we have, for instance, 0o232 for octal and
   0xD29A for hexadecimal?
 
  No, though it is an interesting idea.

 You can get pretty close with existing Haskell though:

 (bin 100010011)

 where bin :: Integer -> Integer, and is left as an exercise for the
 reader. Obviously it's not as high performance as proper binary
 literals, but if you write them as top-level constants, they'll only
 be computed once and shouldn't end up being in the performance
 critical bits.

To make it efficient you could use Template Haskell and have the bin
function generate the constant which could then be spliced in. I
suppose it would look something like:
$(bin 100010011)

Not too bad.
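
For what it's worth, a minimal sketch of what such a bin splice could look
like (the module name and the digits-of-a-decimal-literal encoding are my
assumptions, not code from this thread):

{-# LANGUAGE TemplateHaskell #-}
module Bin (bin) where

import Language.Haskell.TH

-- read the decimal digits of the argument as binary digits and produce
-- the corresponding integer literal, so all the work happens at compile time
bin :: Integer -> Q Exp
bin = litE . integerL . go
  where
    go 0 = 0
    go n = case n `divMod` 10 of
             (q, r) | r <= 1    -> 2 * go q + r
                    | otherwise -> error "bin: digit must be 0 or 1"

With that, $(bin 100010011) in another module expands to the literal 275.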

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Mutable but boxed arrays?

2007-09-06 Thread Josef Svenningsson
On 9/6/07, Simon Marlow [EMAIL PROTECTED] wrote:
 Ketil Malde wrote:
  I, on the other hand, have always wondered why the strict arrays are
  called unboxed, rather than, well, strict?  Strictness seems to be
  their observable property, while unboxing is just an (admittedly
  important) implementation optimization.  I imagine that it'd be at least
  as easy to implement the strictness as the unboxedness for non-GHC
  compilers, and thus increase compatibility.

 You're quite right, that was a mistake, we should have called them strict
 arrays.  Hugs implements the unboxed arrays without any kind of unboxing, I
 believe.

Any chance of a Data.Array.Strict and deprecating Data.Array.Unboxed?

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Small question

2007-08-10 Thread Josef Svenningsson
On 8/10/07, John Meacham [EMAIL PROTECTED] wrote:
 On Thu, Aug 09, 2007 at 06:37:32PM +0100, Andrew Coppin wrote:
  Which of these is likely to go faster?
   type Quad = (Bool,Bool)
 ...
   data Quad = BL | BR | TL | TR
 ...
  I'm hoping that the latter one will be more strict / use less space.
  But I don't truly know...

 The second one will be significantly better for a couple reasons. A
 simple counting of values that they can take on will show not only this
 but that they are not isomorphic even,

 (Bool,Bool) can be one of

 _|_
 (True,True)
 (True,False)
 (False,True)
 (False,False)
 (_|_,True)
 (_|_,False)
 (_|_,_|_)
 (True,_|_)
 (False,_|_)

 that is a total of 10 different cases, each time a bottom might appear,
 a thunk evaluation (or indirection) is involved.


 now, take the second case

 data Quad = BL | BR | TL | TR

 the possible values are

 _|_
 BL
 BR
 TL
 TR

 a whole half of the other representation.


Well, there are ways to improve the situation. If you want to remove
all the bottoms in your type you can define Quad as:

type Quad = Data.Strict.Tuple.Pair Bool Bool

I'm not sure how much speed this will gain in practice and whether it
will beat the four constructor data type though. If anyone does some
measurements it'd be interesting to know.
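
For reference, the essence of such a strict pair is tiny; a self-contained
sketch of the idea (not the actual source of the strict package):

-- forcing the pair forces both components, so the only inhabitants are
-- _|_ and the four fully defined pairs - the same count as the
-- four-constructor Quad plus its bottom
data StrictPair a b = !a :!: !b

type Quad = StrictPair Bool Bool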

Cheers,

Josef

PS. Data.Strict.Tuple lives in the strict package which can be found here:
http://www.cse.unsw.edu.au/~rl/code/strict.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Re: Re: monad subexpressions

2007-08-03 Thread Josef Svenningsson
On 8/3/07, Chris Smith [EMAIL PROTECTED] wrote:
 Neil Mitchell [EMAIL PROTECTED] wrote:
  I'm not convinced either, a nice concrete example would let people
  ponder this a bit more.

 I tried to provide something in my response to Simon.  Here it is again:

  One could sugar:

  do tax <- getTax
     return $ map (\price -> price * (1 + tax)) bill

  into:

  do return $ map (\price -> price * (1 + (<- getTax))) someNums

I think what Simon is worried about here is that the syntax in the
latter expression suggests that the effects of getTax will be
performed every time the lambda is applied. After all getTax appears
inside the lambda. But in fact the side effects will only be
performed once. I agree with Simon that (<- getTax) shouldn't be
promoted outside a lambda.
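
To spell out the two readings (a sketch, reusing getTax and bill from the
quoted example; neither line is proposed syntax, just plain Haskell):

-- the intended meaning: the effect of getTax runs once, before the map
once = do tax <- getTax
          return $ map (\price -> price * (1 + tax)) bill

-- what the placement inside the lambda visually suggests: one effect per
-- list element, which is a different program and needs mapM
perElement = mapM (\price -> do tax <- getTax
                                return (price * (1 + tax))) bill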

Fwiw, I'm all in favor of some new piece of syntax for this problem.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Polymorphic variants

2007-07-25 Thread Josef Svenningsson

On 7/26/07, Jon Harrop [EMAIL PROTECTED] wrote:


Does Haskell have anything similar to OCaml's polymorphic variants?



Not as such, but it's possible to simulate them. As always Oleg was the
one to demonstrate how:
http://okmij.org/ftp/Haskell/generics.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Practical Haskell question.

2007-07-22 Thread Josef Svenningsson

Michael,

I think what you're trying to do is perfectly doable in Haskell and I think
the right tool for it is arrows, as Tomasz Zielonka mentioned before. I
suggest you take a look at the following paper which uses arrows to enforce
security levels in the code:
http://www.cis.upenn.edu/~stevez/papers/abstracts.html#LZ06a
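
To give a feel for why arrows fit, here is a rough sketch of my own (not
the construction from the paper): the arrow carries a static list of the
capabilities a computation requires, so all the checks can be inspected
before any of the dynamic parts are run.

import Control.Category
import Control.Arrow
import Control.Monad ((>=>))
import Prelude hiding (id, (.))

-- an IO action paired with the capabilities it requires
data Checked cap a b = Checked [cap] (a -> IO b)

instance Category (Checked cap) where
  id = Checked [] return
  Checked cs2 g . Checked cs1 f = Checked (cs1 ++ cs2) (f >=> g)

instance Arrow (Checked cap) where
  arr f = Checked [] (return . f)
  first (Checked cs f) = Checked cs (\(a, c) -> do b <- f a; return (b, c))

-- primitive actions declare what they need
requires :: cap -> (a -> IO b) -> Checked cap a b
requires c act = Checked [c] act

-- everything a composed pipeline needs, available without running anything
capabilities :: Checked cap a b -> [cap]
capabilities (Checked cs _) = cs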

Cheers,

Josef

On 6/25/07, Michael T. Richter [EMAIL PROTECTED] wrote:


 On Mon, 2007-25-06 at 12:19 +0300, Benja Fallenstein wrote:

2007/6/25, Michael T. Richter [EMAIL PROTECTED]:

 OK, just to prevent this getting side-tracked: I'm absolutely
uninterested in the results of performActionA before determining if
performActionB is permitted/possible/whatever.  Think more in terms of
security permissions or resource availability/claiming than in terms of
chaining results.  I want to know before I begin to collect the results of
performAction* that I will actually stand a chance at getting results at
all.


Uh, the posts you quote were precisely about how to do that. No
side-tracking going on. :-)


It looked to me like there were people arguing about whether the x
returned from one action was going to be used in the next action.

Let me try and rephrase the question. :)

A conventional approach to what I'm doing would be something like this (in
bad pseudocode):

doStuff():
if checkPossible([opA, opB, opC]):
A
B
C
else:
exception Preconditions not met

My objection to this is its error-prone scaffolding:

   1. There's no enforced link between the checking operations and the
   actual operations.  By mistake or by deliberate action it is possible to put
   operations in the main body which have not been checked by the guards.
   2. As code evolves and changes, it is very easy to have the check
   diverge from the contents of the body as well.


Now if the actions were trivial or easily reversible, an alternative model
is something like this (in bad pseudocode) where it's assumed that each
operation checks for its privileges/capabilities/whatever as part of its
operation:

doStuff2():
A
try:
B
try:
C
catch:
undoB
throw
catch:
undoA

This looks to me like Don Stuart's executable semi-colons and could be
coded as a pretty conventional monad (unless my eyes are deceiving me).  But
if doing A, say, involved expensive operations (think: generating an RSA key
or making a database connection on a heavily-loaded server) or if doing B
involved modifying some external state that is difficult to undo this is a
less-than-ideal model.  Let's say that C fails for whatever reason
(insufficient privileges, the database server is dead, the phase of the moon
is wrong for the species of chicken sacrificed at the keyboard -- anything),
then we've got time wasted in A and B has just changed something we can't
easily unchange.

So I'd like some way of getting the automated check of
permission/capability/availability/whatever done before performing the
actual actions.

Now in a language where functions are identifiable types, a solution could
look like this (among a myriad of other possible solutions):

check(Operation):
case Operation of:
A:
return checkConditionA
B:
return checkConditionB
C:
return checkConditionC

runStuff(actions):
for each action in actions:
if not check(action.left):
throw CheckFailure
for each action in actions:
action.left(action.right)

doStuff3():
actions=[(A, a_args), (B, b_args), (C, c_args)]
try:
runStuff(actions)
catch CheckFailure:
actions=nil

The check() function called here can use the identity of the action
provided plus any information provided externally (network connections open,
permissions available, etc.) to pass/fail the
capabilities/resources/whatever and the action's execution is deferred until
the check has passed.  The action's check *AND* its execution is unavailable
to the programmer so there's less room for fraud and oversight and all the
other things which make programs buggy and unreliable and such joys to work
with both as a programmer and as a user.  In fact with languages as
malleable as Ruby (or possibly even Python) some of the ugly scaffolding
above could be made to vanish behind the scenes leaving pretty clean code
behind.  (Uglier languages like C/C++, of course, would leave all the
scaffolding lying around, but it would still be doable.)

But of course this can't be done in Haskell this way because functions
aren't items in Haskell.  There is no function equality check.  My check()
function can't look like:

check :: (a -> b)
check A = ...
check B = ...
check C = ...
check _ = error "no such function"

This leaves me in a position I can't think myself out of (hence the cry
for help).  I'd like it to be possible to have a do block with as little
visible scaffolding as possible (ideally *none*) where I 

Re: [Haskell-cafe] reimplementing break (incorrectly) quickcheck p list gives me feedback that it breaks on list, but not what predicate test caused the breakage

2007-07-06 Thread Josef Svenningsson

On 7/6/07, Thomas Hartman [EMAIL PROTECTED] wrote:


I am a total quickcheck noob. Is there a way to find out what the predicate test
function is, below?


The trick that I know of to do that is to not generate a function in
the first place, but a data type which can represent various functions
of this type. Whenever we want to use the data type as a function in
the test we convert it to a function. Let's take an example:

import Test.QuickCheck

data IntFun = Plus5 | Mult5 deriving Show

apply :: IntFun -> Int -> Int
apply Plus5 = (+ 5)
apply Mult5 = (* 5)

instance Arbitrary IntFun where
  arbitrary = elements [Plus5, Mult5]

prop_foo f a = apply f a == a
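
Because IntFun has a Show instance, a failing run now names the offending
function; the property above is deliberately false, so something like Plus5
together with some Int gets reported as the counterexample (using the
import and definitions above):

main :: IO ()
main = quickCheck prop_foo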

Of course the data type will typically be much more complicated
depending on what kind of functions you want to be able to generate.

This trick is documented in the following paper:
http://www.st.cs.ru.nl/papers/2006/koop2006-TestingOfHigherOrderFunctionsAPLAS.pdf

HTH,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Abstraction leak

2007-07-01 Thread Josef Svenningsson

On 6/30/07, Jon Cast [EMAIL PROTECTED] wrote:

On Friday 29 June 2007, Jon Cast wrote:
 Here's my solution (drawn from a library I'll be posting Real Soon Now):
snip solution

I forgot to point out that this is 75-90% drawn from a library called
Fudgets[1], which is probably the most extended practical meditation to date
on programming with lazy streams in Haskell.  Embedding that approach in a
monadic interface seems to be my own idea, though.


Koen Claessen had the same idea. He used it for designing parsers. See:
http://www.cs.chalmers.se/~koen/pubs/entry-jfp04-parser.html


Jonathan Cast
http://sourceforge.net/projects/fid-core
http://sourceforge.net/projects/fid-emacs

[1] http://www.md.chalmers.se/Cs/Research/Functional/Fudgets/


Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Functional Data Structures

2007-06-21 Thread Josef Svenningsson

On 6/21/07, Michael T. Richter [EMAIL PROTECTED] wrote:


 Is there a good book or web site outlining decent pure-lazy-functional
data structures, with or without code samples?



Chris Okasaki's publication page is a goldmine when it comes to functional
data structures, lazy and otherwise.
http://www.eecs.usma.edu/webs/people/okasaki/pubs.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Higher order types via the Curry-Howard correspondence

2007-05-13 Thread Josef Svenningsson

I think both Benja's and David's answers are terrific. Let me just add
a reference.
The person who's given these issues most thought is probably Per
Martin-Löf. If you want to know more about the meaning of logical
connectives you should read his "On the Meanings of the Logical
Constants and the Justifications of the Logical Laws" [1]. It consists
of three lectures which I think are quite readable although I can
recommend skipping the first lecture, at least on a first read-through
since it's pretty heavy going.
In the beginning of the third lecture you will find the classic quote:
"the meaning of a proposition is determined by what it is to verify
it, or what counts as a verification of it"
This is essentially what both Benja and David said.

hth,

Josef

[1] http://www.hf.uio.no/ifikk/filosofi/njpl/vol1no1/meaning/meaning.html

On 5/11/07, Benja Fallenstein [EMAIL PROTECTED] wrote:

Adding some thoughts to what David said (although I don't understand
the issues deeply enough to be sure that these ideas don't lead to
ugly things like paradoxes)--

2007/5/10, Gaal Yahas [EMAIL PROTECTED]:
 Since the empty list inhabits the type [b], this theorem is trivially
 a tautology, so let's work around and demand a non-trivial proof by
 using streams instead:

 data Stream a = SHead a (Stream a)
 sMap :: (a -> b) -> Stream a -> Stream b

 What is the object Stream a in logic?

It's not that much more interesting than list. The 'data'
declaration can be read as,

To prove the proposition (Stream a), you must prove the proposition
'a' and the proposition 'Stream a.'

In ordinary logic this would mean that you couldn't prove (Stream a),
of course, but that just corresponds to strict languages in which you
couldn't construct an object of type Stream a (because it would have
to be infinite). To make sense of this, we need to assume a logic in
which we can have similar 'infinite proofs.' (This is the part where
I'm not sure it's really possible to do. I haven't read the Pierce
chapter David refers to.)

With that reading, (Stream a) is basically the same proposition as (a)
-- as evidenced by

f x = SHead x (f x)    -- f :: a -> Stream a
g (SHead x _) = x      -- g :: Stream a -> a

We can find more interesting propositions, though. Here's an example
(perhaps not useful, but I find it interesting :-)):

data Foo a b = A a | Fn (Foo a b -> b)

We can prove this proposition if we can prove one of these propositions:

a
a -> b
(a -> b) -> b
((a -> b) -> b) -> b
...

Each of these is weaker than the previous one; if x is a proof of
proposition n, then (\f -> f x) is a proof of proposition n+1. The
fourth one is a tautology in classical logic, but not in
intuitionistic logic.

- Benja
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bloom Filter

2007-05-01 Thread Josef Svenningsson

Hi,

Just a small comment on one of the comments.

On 5/1/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Also, rather than this:

add :: Bloom a -> a -> Bloom a

a better argument order is this:

insert :: a -> Bloom a -> Bloom a

That way, you can use it with foldr.


Hmmm. If you want to create a Bloom using a fold wouldn't it make more
sense to use foldl'? I think the argument order is fine.
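
Concretely, both argument orders read naturally with the corresponding
fold; a sketch, abstracted over the concrete Bloom type so it stands alone:

import Data.List (foldl')

-- with insert :: a -> bloom -> bloom, foldr fits directly
fromListR :: (a -> bloom -> bloom) -> bloom -> [a] -> bloom
fromListR insert empty = foldr insert empty

-- with add :: bloom -> a -> bloom, a strict left fold fits directly
fromListL :: (bloom -> a -> bloom) -> bloom -> [a] -> bloom
fromListL add empty = foldl' add empty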

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Template Haskell a suitable macro language?

2007-04-24 Thread Josef Svenningsson

On 4/24/07, Jacques Carette [EMAIL PROTECTED] wrote:

In Ocaml, you can frequently use polymorphic variants to get the same
effect.

Which means that if you are willing to do enough type-class-hackery, it
should, in principle, be possible to do the same in Haskell.  But it
sure isn't as convenient!


You seem to imply that there is an encoding of polymorphic variants in
Haskell using type classes. While I know that it's possible to achieve
similar effects using type classes I haven't seen a direct encoding.
If there is such an encoding I would be very interested to hear about
it.


This all points to some clear need for more ``flavours'' of polymorphism
being needed (in Haskell), to be able to express *in the type system*
what TH allows you to say outside.


I totally agree with this.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [C2hs] anyone interested in developing a Language.C library?

2007-04-21 Thread Josef Svenningsson

On 4/21/07, Donald Bruce Stewart [EMAIL PROTECTED] wrote:

chak:
 Duncan Coutts wrote:
 If anyone is interested in developing a Language.C library, I've just
 completed a full C parser which we're using in c2hs.
 
 It covers all of C99 and all of the GNU C extensions that I have found
 used in practise, including the __attribute__ annotations. It can
 successfully parse the whole Linux kernel and all of the C files in all
 the system packages on my Gentoo installation.

 Great work!

 Using this as a basis for a Language.C would be a really worthwile project.


I think people should be very interested in this.

The ability to easily manipulate and generate C would quickly insert
Haskell into another useful niche. There must *surely* be real money
in writing nice Haskell programs that optimise/analyse/refactor/generate
C code...


Unfortunately the niche is not empty. There is an ocaml library called
cil which is supposed to be pretty sweet for manipulating C code. But
I still think a Haskell library would be a very good idea, and perhaps
one can look at cil for inspiration.

cil can be found here:
http://hal.cs.berkeley.edu/cil/

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] First order Haskell without Data

2007-04-19 Thread Josef Svenningsson

Hi,

Just a comment or two on the implications of converting higher-order
functions to data.

The paper you reference about this uses the method of
defunctionalization. This is a whole program transformation and might
therefore not be suitable in a compiler such as GHC or YHC. On the
other hand, it is more or less exactly what JHC uses

If you want to support separate compilation you should go for more
conventional closure conversion. This is in essence what every
compiler which compiles a higher order language does. But if you want
to have a typed language which can express this transformation you
will be needing quite a sophisticated type system.
http://www.cs.cmu.edu/~rwh/papers/closures/popl96.pdf

Just my 2 öre.

Josef

On 4/19/07, Neil Mitchell [EMAIL PROTECTED] wrote:

Hi,

I've been wondering what is essential to Haskell and what isn't. Not
from a point of view of what could be removed from the language, but
what could be removed from a Core language.

Given the features of higher-order, laziness and data types:

Laziness can be converted to higher-order functions

Data types can be converted to higher-order functions

Higher-order functions can be converted to Data


So as far as I can see it we have left:
{data-types only}
{higher-order functions only}

But we don't have laziness only, so my question is if it is possible
to represent all of Haskell in a first-order language without data
types, but with laziness. If its possible to do so in a strict
first-order language without data types, then I'd also be interested.

Are any of these known results?

Thanks

Neil

References:

* data types - higher order is in Efficient Interpretation by
Transforming Data Types and Patterns to Functions
http://www.cs.nott.ac.uk/~nhn/TFP2006/Papers/03-JansenKoopmanPlasmeijer-EfficientInterpretation.pdf

* laziness - functions, functions - data is in Definitional
Interpreters for Higher-Order Programming Languages
http://www.brics.dk/~hosc/local/HOSC-11-4-pp363-397.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] idea for avoiding temporaries

2007-03-09 Thread Josef Svenningsson

On 3/8/07, John Meacham [EMAIL PROTECTED] wrote:

it seems we can almost do this now without adding any new API calls,
just have 'thawArray' and 'freezeArray' perform the check, and behave
like 'unsafeThawArray' or 'unsafeFreezeArray' when they are the only
reference.

The compiler may even be able to statically replace some
calls to thawArray or freezeArray with the unsafe versions with an
extension of the rules syntax

{-# RULES forall a | unique a . freezeArray a = unsafeFreezeArray a #-}

this syntax actually came up in a previous discussion about wanting to
fire rules only when the argument is a constant literal (though, you don't
care about the particular constant value it is)

I imagine infering the uniqueness of values shouldn't be that hard as a
form of it is already done for avoiding thunk-updates.


You have to be careful here. Uniqueness typing is not the same as
usage analysis but the two are confusingly similar. Imagine a function
which takes, say, an array as its argument. If it says that the array
is unique, then there must only be a single reference to that array as
the function probably updates it destructively. Compare this to the
situation where we say that the array is only used once in the
function. The array may have other references to it in this case, the
function doesn't care.
This boils down to the fact that the usage analysis propagates
information to the point where a value is created; should the thunk be
updateable or not? Whereas with uniqueness analysis we propagate
information to the point where it is used: is it OK to destructively
update the value instead of copying it?

I'm also not sure that inferring uniqueness types in the compiler
would be the way to go. I think David wants to be certain that his
arrays are not copied. Having to inspect the output of the compiler is
not the nicest way to do this. Some mechanism which enables the
programmer to tell the compiler what his intent is (such as the
uniqueness type system a la Clean) would be much preferable.

Of course, that doesn't mean that your idea is useless. It could
still, and probably would be, a very valuable optimization in the
compiler. I just don't think that it will help David with his
particular problem.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] seq (was: Article review: Category Theory)

2007-01-19 Thread Josef Svenningsson

On 1/20/07, Brian Hulley [EMAIL PROTECTED] wrote:

Neil Mitchell wrote:
 Hi Brian,
 Is there any solution that would allow excess laziness to be removed
 from a Haskell program such that Hask would be a category?

 class Seq a where
    seq :: a -> b -> b

 Then you have a different seq based on the types, and it doesn't go
 wrong. You would probably want deriving Seq support.

This seems an amazingly neat solution to a really terrible problem, so:

1) Does anyone know why this was not used in the first place?


It *was* used before. See section 6.2.7 of the Haskell 1.4 report. It
was thrown out in Haskell98. I don't remember why though.


2) Would it be good to use this in future versions of Haskell?

3) Is there any practical program which requires the current seq that could
not be rewritten to use the typeclass seq?


I'll pass on these two questions.

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] State monad strictness - how?

2007-01-11 Thread Josef Svenningsson

On 1/11/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Josef Svenningsson wrote:
 Take the state monad for example. Should it be
 strict or lazy in the state that it carries
 around? What about the value component?
 ...both strict and lazy variants are useful.

I wrote:
 Are those really needed?

 ...it wouldn't be very convenient, would it?
 Sometimes I find that I want strict state by
 default and then I don't want to sprinkle my
 code with seqs.

I don't think that is so inconvenient. Why do we
need to define getStrict, putStrict, getsStrict,
etc., when it is perhaps even more clear to write
get $!, put $!, (gets . ($!)), etc.?

The same goes for Iavor's monad library.


Indeed. I'm embarrassed that I've never realized this before. I
suppose I thought the tuple solution was so elegant that I never
realized there was a simpler solution at hand.
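
Spelled out, the strict variants fall out of the lazy primitives; a small
sketch against mtl's MonadState (the names are mine):

import Control.Monad.State

putStrict :: MonadState s m => s -> m ()
putStrict s = put $! s

modifyStrict :: MonadState s m => (s -> s) -> m ()
modifyStrict f = do s <- get
                    put $! f s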


 Now, the challenge here is to design a library
 which doesn't explode in size from all the
 various possibilities for strictness or
 laziness.

I am now pretty convinced that the only thing we
need is two versions of each monad, varying only
the strictness of >>=.

Then, of course, we will need runFoo for each, and
evalFoo and execFoo for each state monad.

And adaptors that allow you to run a lazy
calculation inside a strict one and vice-versa. So
we need an asStrict function and an asLazy
function for each lazy/strict pair of monads.

I think that is all we need. Not too bad.

I am very impressed that we get most of that
almost for free in Iavor's library.


Yes, it seems quite feasible.


 http://www.cs.chalmers.se/~josefs/monadlib/
 ...instantiating this with different pair types
 with different strictness properties will give
 us total control over strictness for state and
 value.

Hmm. Your current implementation doesn't seem to
do it that way. You use tuples for both the lazy
version and the strict version, and each defines
its own Monad instance for all Pair types. So it
is impossible to use both in the same module, even
with hiding.


The way I enable laziness in a strict monad and vice versa is to use a
non-standard bind operator, strictBind or lazyBind. But that's not
really scalable. The whole architecture that I used in my library
isn't really all that good. The reason I came up with it was to solve
a completely different problem which doesn't really apply to this
library anyway. The library design you outline above is indeed the way
to go.


I tried to work on this a little. I defined a
strict Pair type and tried to find a single Monad
instance that will give the right strictness for
both if you just vary between lazy and strict
pairs.

We need that both of the following converge
in constant stack space:

take 100 $ evalState (repeatM $ modify (+1)) 0
execStateStrict (replicateM_ 10 $ modify (+1)) 0

(You can verify that this is true if you use the
standard evalState, and replace execStateStrict
with runIdentity . execStateT.)

I was unable to hit upon the right combination of
seqs in the Monad instance. Is it really possible?

Of course, you could use a newtype of tuples and
define separate Monad instances. But then we are
not gaining anything over just defining the lazy
and strict monads directly.


I'm not sure exactly what you're trying to achieve here. If the tuple
type you have is strict in both components then you're never going to
get these examples to work. However, if you choose the lazy state
monad and choose tuples carefully then both example can be made to
terminate. Here's an example:
ex1 = take 100 $ evalState
        ((repeatM $ modify (+1)) :: StateP StrictLeft Int [()])
        0
ex2 = execStateStrict
        ((replicateM_ 10 $ modify (+1)) :: StateTP StrictLeft Int Identity ())
        0

The first example also terminates with the all-lazy pair (,); the
important thing is the laziness of the right component.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] State monad strictness - how?

2007-01-10 Thread Josef Svenningsson

Yitzchak,

I agree with you that both lazy and strict monads are important and
that we should have both options in a monad library.

But the fun doesn't end there. There are other strictness properties
to consider. Take the state monad for example. Should it be strict or
lazy in the state that it carries around? What about the value
component? I think the answer to these questions are the same as for
monadic strictness above: both strict and lazy variants are useful.

Now, the challenge here is to design a library which doesn't explode
in size from all the various possibilities for strictness or laziness.
In fact I did once bite the bullet and came up with a library that
does all this. Though I haven't released it publicly yet. I never took
the time to polish the code to the point where I wouldn't be
embarrassed about showing it to others.

If you kick me hard enough I might release the library.

Cheers,

Josef

On 1/10/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Hi Bulat,

I wrote:
 [State and StateT] should be consistent. I
 would much prefer for them both to be lazy.

Bulat Ziganshin wrote:
 imho, lazy monads (as any other lazy things) is a source of beginner's
 confusion. therefore it may be better to provide default monads as strict
 and lazy ones - for one who knows what he wants - with a Lazy prefix, e.g.
 LazyST, LazyState...

Well, as long as both are provided, that is fine with me.

But I do not think that laziness in monad methods is a reason
for beginners' confusion.

First of all, it is natural to get a little confused about strictness
at the beginning. I'm not sure it happens more often with
monads than anywhere else.

If there is more confusion about strictness with monads,
it is because of the fact that many introductions/tutorials
confuse all monads with IO. They say something like:

"So how do you create side effects in the real world? That is
what monads are for."

No, no, no! That is what ** IO ** is for. Most monads are pure!

In fact, I think making the default strict will create more confusion.

We should help beginners to understand right from the start that
do-notation is not a procedure of commands for the computer
to carry out. It is just a special syntax for defining functions. We
use it when it is more natural to describe the effect of a function
in a step-by-step style, just as happens sometimes in mathematics.
But the compiler is under no obligation to follow our steps literally.

Except with IO - when dealing with the real world, we need
to be able to specify the exact order in which things happen.

ST represents using physical memory as a fast storage device.
So it is really IO in disguise.

Regards,
Yitz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] State monad strictness - how?

2007-01-10 Thread Josef Svenningsson

On 1/10/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Hi Josef,

Josef Svenningsson wrote:
 ...the fun doesn't end there. There are other strictness properties
 to consider.

Could be. But after using mtl heavily for a few years now,
I find that in practice the only one where I have felt the need
for control over strictness is >>=, like Dean's example.


Yes. For most uses this finer control of strictness is just overkill.
But in the rare cases when you really need this tweakability then it's
a royal pain if you don't have it.


 Take the state monad for example. Should it be strict or
 lazy in the state that it carries around? What about the value
 component? I think the answer to these questions are the same as for
 monadic strictness above: both strict and lazy variants are useful.

Are those really needed? Can't the strictness of the state be
fully controlled by seq with runState, get, and put, and by
 choosing lazy or strict >>=? And similarly with value?


Yes, you're right. But it wouldn't be very convenient, would it?
Sometimes I find that I want strict state by default and then I don't
want to sprinkle my code with seqs. Furthermore this extra level of
control is not that difficult to implement in a library.


 Now, the challenge here is to design a library which doesn't explode
 in size from all the various possibilities for strictness or laziness.

The same challenge exists in many of the Data.* libraries.
I think this is very important.


Indeed.


 In fact I did once bite the bullet and came up with a library that
 does all this. Though I haven't released it publicly yet. I never took
 the time to polish the code to the point where I wouldn't be
 embarrassed about showing it to others.
 If you kick me hard enough I might release the library.

My boot is not long enough :). But I would love to see
what you did.


:-) Ok, I've put some files under the following url:
http://www.cs.chalmers.se/~josefs/monadlib/

It might need some explanation since I use the module system quite
heavily. For a monad such as the state monad the hierarchy looks like
this:
* Control.Monad.State.Base contains the type declaration and basic
 functionality, but NOT instances of the monad class.
 This module is not exposed.
* Control.Monad.State.Lazy
* Control.Monad.State.Strict
 Contains instances for monad classes.
* Control.Monad.State is supposed to be the default and just reexports
 Control.Monad.State.Strict.

Furthermore, again taking the state monad as example, the monad is
parameterized on the type of pair used in the definition of the monad.
So instead of:
newtype State s a = State { runState :: (s -> (a, s)) }
we have:
newtype StateP p s a = StateP { runStateP :: (s -> p a s) }

Now, instantiating this with different pair types with different
strictness properties will give us total control over strictness for
state and value. Data.Pair provides various pair types for this purpose.
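
The rough shape of the idea, as a sketch of my own (not necessarily how the
code behind the link above does it): the bind only talks to the pair through
a small interface, so choosing the pair type is what chooses the strictness.

class PairLike p where
  pair   :: a -> s -> p a s
  first  :: p a s -> a
  second :: p a s -> s

-- one choice: a pair that is strict in the state component
data StrictRight a s = SR a !s

instance PairLike StrictRight where
  pair            = SR
  first  (SR a _) = a
  second (SR _ s) = s

newtype StateP' p s a = StateP' { runStateP' :: s -> p a s }

bindP :: PairLike p => StateP' p s a -> (a -> StateP' p s b) -> StateP' p s b
bindP m k = StateP' $ \s -> let r = runStateP' m s
                            in runStateP' (k (first r)) (second r)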

Enjoy,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Traversing a graph in STM

2006-09-19 Thread Josef Svenningsson

On 9/19/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:


On Sep 18, 2006, at 4:47 AM, Einar Karttunen wrote:

 On 18.09 01:23, Josef Svenningsson wrote:
 On 9/17/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:
 You can associate a unique name with each traversal, and store a set
 of traversals at each node (instead of a mark bit).  But this set
 grows without bound unless you follow each traversal with a
 cleaning
 traversal which removes a traversal tag from the set.  And you'd
 need some way to generate the unique names.

 Well, if your set implementation used weak pointers there would be no
 need for a cleaning traversal. The garbage collector will take
 care of
 that. The only slightly tricky thing is to keep a live pointer to the
 unique traversal name during the entire of the traversal. But I don't
 think that should be a big problem.


This just amounts to saying we can use the GC to implement the
cleanup traversal on our behalf.

Indeed.


I'd be quite surprised if this
were actually more efficient.

It is a lot more efficient in the sense that the GC is already
written. We don't have to implement a cleanup traversal ourselves.


But this is all a bit moot, as Einar
observes:

 This suffers from the problem that two traversals reading the
 same parts of the graph would have a good chance to make each other
 retry.

Any solution which stores traversal states in the nodes has this
problem.  Fundamentally you can't  update the state of graph nodes in
any way using STM and expect to run multiple traversals concurrently
over the same subgraph.


Alas, yes.

All the best,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Traversing a graph in STM

2006-09-17 Thread Josef Svenningsson

On 9/17/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:


On Sep 13, 2006, at 3:37 AM, Einar Karttunen wrote:

 Hello

 Is there an elegant way of traversing a directed graph in STM?

 type Node  nt et = TVar (NodeT nt et)
type Edge  et    = TVar et
 data NodeT nt et = NodeT nt [(Node nt et, Edge et)]

 type MyGraph = Node String Int

 When implementing a simple depth first search we need a way to
 mark nodes (= TVars) as visited. In addition multiple concurrent
 searches should be possible.

 Is it possible to avoid passing around an explicit Set of visited
 nodes?

You can associate a unique name with each traversal, and store a set
of traversals at each node (instead of a mark bit).  But this set
grows without bound unless you follow each traversal with a cleaning
traversal which removes a traversal tag from the set.  And you'd
need some way to generate the unique names.


Well, if your set implementation used weak pointers there would be no
need for a cleaning traversal. The garbage collector will take care of
that. The only slightly tricky thing is to keep a live pointer to the
unique traversal name during the entire of the traversal. But I don't
think that should be a big problem.
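
A rough sketch of what I have in mind (my own code, untested): the mark set
holds weak pointers to a per-traversal token, so entries die automatically
once the traversal's token is no longer reachable.

import Control.Concurrent.STM
import Data.Unique (Unique, newUnique)
import System.Mem.Weak (Weak, mkWeakPtr, deRefWeak)

-- each node carries the traversals that have marked it
data Node a = Node a (TVar [Weak Unique]) [Node a]

-- the token must be kept alive (in scope) for the whole traversal
newTraversal :: IO (Unique, Weak Unique)
newTraversal = do
  tok <- newUnique
  w   <- mkWeakPtr tok Nothing
  return (tok, w)

-- dead weak pointers are simply ignored, so the GC does the cleaning
visited :: Unique -> Node a -> IO Bool
visited tok (Node _ marks _) = do
  ms    <- atomically (readTVar marks)
  alive <- mapM deRefWeak ms
  return (Just tok `elem` alive)

markVisited :: Weak Unique -> Node a -> STM ()
markVisited w (Node _ marks _) = readTVar marks >>= writeTVar marks . (w :)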

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Variants of a recursive data structure

2006-08-03 Thread Josef Svenningsson

Klaus,

You've gotten many fine answers to your question. I have yet another
one which is believe is closest to what you had in mind. The key to
the solution is to add an extra type parameter to Labelled like so:

data Labelled f a = L String (f a)

Now you can use it to form new recursive type with Mu:

type LabelledExp = Mu (Labelled Exp)

And it is also possible to write an unlabel function which knows very
little about the structure of Exp:

unlabel :: Functor f => Mu (Labelled f) -> Mu f
unlabel (Mu (L _ r)) = Mu (fmap unlabel r)

Another bonus is that it's all Haskell98.
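
For completeness, the only extra ingredient unlabel needs is a Functor
instance for the base functor; with the Exp and Mu definitions from the
quoted mail below it looks like this (my code):

instance Functor Exp where
  fmap _ (Num n)   = Num n
  fmap g (Add x y) = Add (g x) (g y)

-- a labelled term and its unlabelled counterpart
example :: LabelledExp
example = Mu (L "root" (Add (Mu (L "l" (Num 1))) (Mu (L "r" (Num 2)))))

plain :: Mu Exp
plain = unlabel example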

The name I came up with for the trick of adding a higher-kinded type
parameter to Labelled is "Functor Transformer". Transformer -
because it transforms the type it is applied to (in this case Exp),
and Functor - because when using Mu to tie the recursive knot one
often requires the types to be instances of Functor, as I'm sure you're
aware of.

Good luck with whatever it is you need this for.

Josef

On 8/3/06, Klaus Ostermann [EMAIL PROTECTED] wrote:

Hi all,

I have a problem which is probably not a problem at all for Haskell experts,
but I am struggling with it nevertheless.

I want to model the following situation. I have ASTs for a language in two
variatns: A simple form and a labelled form, e.g.

data SimpleExp = Num Int | Add SimpleExp SimpleExp

data LabelledExp = LNum Int String | LAdd LabelledExp LabelledExp String

I wonder what would be the best way to model this situation without
repeating the structure of the AST.

I tried it using a fixed point operator for types like this:

data Exp e = Num Int | Add e e

data Labelled a = L String a

newtype Mu f = Mu (f (Mu f))

type SimpleExp = Mu Exp

type LabelledExp = Mu Labelled Exp

The SimpleExp definition works fine, but the LabeledExp definition doesn't
because I would need something like Mu (\a -> Labeled (Exp a)) where \
is a type-level lambda.

However, I don't know how to do this in Haskell. I'd need something like the
. operator on the type-level. I also wonder whether it is possible to
curry type constructors.

The icing on the cake would be if it would also be possible to have a
function

unlabel :: LabeledExp -> Exp

that does *not* need to know about the full structure of expressions.

So, what options do I have to address this problem in Haskell?

Klaus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Code review: efficiency question

2006-05-03 Thread Josef Svenningsson

Brian,

You might also want to take a look at the list fusion functionality in
GHC which often can help optimize your programs when programming with
lists.
http://www.haskell.org/ghc/docs/latest/html/users_guide/rewrite-rules.html#id3153234
It doesn't help in your particular program but it might be usable for
you in the future.

Cheers,

/Josef

On 5/3/06, Brian Hulley [EMAIL PROTECTED] wrote:

Bulat Ziganshin wrote:
 [ideas including reverseMapM_]
 you will laugh, but speed of your two solutions depends on so many
 factors (including size of CPU cache) that noone can say that is
 better in general. although for small lists reverseMapM_ should be
 faster than reverse+mapM. what will be faster - using of higher-order
 function or direct recursion, i can't say, it's a really
 counter-intuitive area of ghc optimizer :)

 of course, i don't think that all that really matters for your program
 (drawing should anyway need much more time than looping). just use
 higher-level approach (that makes code simpler to write, understand
 and maintain) and don't bother your mind :)

Hi Bulat!
Thanks for the suggestions about reverseMapM_ etc.
It seems that since the speeds of the two solutions can be relatively
faster/slower on different platforms/CPUs I might as well just use the
combination of existing functions mapM_ and reverse at the moment to get
readable code with the least amount of effort :-)

Best regards, Brian.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] show for functional types

2006-04-05 Thread Josef Svenningsson
On 4/5/06, Robert Dockins [EMAIL PROTECTED] wrote:
 Hey, if we wanted a private conversation, we'd take it off-list. :-)

:-)

  Do you have any reference to the fact that there is any diagreement
  about the term? I know it has been used sloppily at times but I think
  it is pretty well defined. See section 4 of the following paper:
  http://www.dina.kvl.dk/~sestoft/papers/SondergaardSestoft1990.pdf

 http://en.wikipedia.org/wiki/Referential_transparency
 http://lambda-the-ultimate.org/node/1237

 It may be that experts have a well defined notion of what it means,
 but I think that non-experts (ie, most people) have a pretty vague
 idea of exactly what it means.

The wikipedia article was very enlightening on this subject,
especially the disclaimer on the top :-) Thanks!

  So it's a standard substitutivity property. The only problem here is
  that Haskell has a pretty hairy denotational semantics and I don't
  think anyone has spelled it out in detail.

 I think that may be the main problem, at least in this context.  We
 can take a nice lovely definition of rt, as above, but its
 meaningless without a good semantic model.  So we're forced to
 approximate and hand-wave, which is where the disagreement comes in.

Yes, I know what you mean. In the last few weeks this legacy of
hand-waving has been showing its ugly face on the haskell-prime list
as well. Maybe it's time to sit down and define properly what we mean
in certain contexts. It seems we would need more than one semantics to
cover the different ways that people reason about Haskell programs. Just so
that one can say: "Here I'll be using Fast'n-Loose Reasoning(TM)" or
"This law holds in the Handwaving Semantics(TM)". The point is to be
able to reference these precisely defined approximations and thus
enjoying being both sloppy and precise at the same time.
But it's real work doing this kind of thing. During a graduate course
at Chalmers we started at something like this but it grew pretty hairy
pretty quickly.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Josef Svenningsson
On 3/28/06, Neil Mitchell [EMAIL PROTECTED] wrote:
  This feels like a situation Parsec users would find themselves in all the
  time.  When you have a bunch of parsers in a 'choice', does the start of the
  input stream linger until the last parser is executed?

 No, as soon as one token is accepted from any parser, that input is
 decided upon, and it will never go back. If you want that behaviour
 you have to wrap the particular parser in try, which does give the
 backtracking (and space leak)

I personally find this behaviour terribly confusing. It makes writing
the parser highly unmodular. It forces me to know exactly what a
certain parser recognizes to know whether I need to wrap a 'try'
around it when I compose it in parallel with another parser. Which is
why I much prefer to use parsing libraries based on Koen Claessen's
technique which performs all parses at the same time. It works
breadth-first on the parse forest (the set of parse trees).
Text.ParserCombinators.ReadP is one example which uses this technique.
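
A tiny Parsec illustration of the non-modularity (my own example, not from
the thread):

import Text.Parsec
import Text.Parsec.String (Parser)

-- string "for" consumes the shared prefix "fo" before failing on "foo",
-- and because input was consumed the right-hand alternative is never tried
bad :: Parser String
bad = string "for" <|> string "foo"

-- only by knowing what the left parser consumes do we know to add 'try'
good :: Parser String
good = try (string "for") <|> string "foo"

-- parseTest bad  "foo"   ==> parse error
-- parseTest good "foo"   ==> "foo"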

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Looking for an efficient tree in STM

2006-03-18 Thread Josef Svenningsson
Sorry for the slow reply,

On 3/8/06, Einar Karttunen ekarttun@cs.helsinki.fi wrote:
 Does anyone have an efficient tree implemented in STM that supports
 concurrent updates in an efficient fashion? This seems surprisingly
 hard to implement - a normal binary tree with links as TVar is very
 slow and does not scale very well.

I'm just wondering whether it is important for you that all the internal
links are TVars. That of course depends on what kind of API you're
imagining for your data structure. I would imagine though that a single
TVar pointing to the root of your favourite functional tree implementation
will be quite sufficient. My guess is it would also be the fastest way of
implementing it.
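
In code, the single-TVar-at-the-root idea is about this much (a sketch
using Data.Map as a stand-in for your favourite functional tree):

import Control.Concurrent.STM
import qualified Data.Map as Map

type TTree k v = TVar (Map.Map k v)

newTree :: STM (TTree k v)
newTree = newTVar Map.empty

insert :: Ord k => k -> v -> TTree k v -> STM ()
insert k v t = readTVar t >>= writeTVar t . Map.insert k v

lookupT :: Ord k => k -> TTree k v -> STM (Maybe v)
lookupT k t = Map.lookup k `fmap` readTVar t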

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Expanding do notation

2006-01-07 Thread Josef Svenningsson
On 1/7/06, Chris Kuklewicz [EMAIL PROTECTED] wrote:
 When you put print (head p) at the end, it keeps a reference to the
 whole list p which is your space leak. If you want to store the head
 of p, this *should* work:

 main = do
   n <- getArgs >>= return . read . head
   let p = permutations [1..n]
   headOfP <- return (head p)
   mapM_ (putStrLn . concatMap show) $ take 30 p
   putStr $ "Pfannkuchen(" ++ show n ++ ") = "
   putStrLn . show $ foldl' (flip (max . steps 0)) 0 p
   print headOfP

 The headOfP is computed as an IO action at that point in the program,
 so headOfP does not hold a reference to p as a whole.

Without having tried this I'd say that you should use
Control.Exception.evaluate instead of 'return'. Or you could use 'seq'. I
suspect that otherwise the 'head p' will not be evaluated until the last
line and you will have the same problem as before.
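
Concretely, the change would be something like this (a sketch):

import Control.Exception (evaluate)

main :: IO ()
main = do
  let p = [1 :: Int ..]
  headOfP <- evaluate (head p)   -- forced here, so no thunk keeps all of p alive
  print headOfP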

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Papers from the 2005 Haskell Workshop (Tallinn)?

2005-10-11 Thread Josef Svenningsson
On 10/12/05, John Meacham [EMAIL PROTECTED] wrote:
I certainly think we should somehow centralize an index to papers on
haskell. I have found it extremely difficult to track down papers for
authors that have since moved out of academia or have passed on and
don't have their personal homepages with their papers anymore.

There have been efforts to try to do this before. Here is an example:
http://haskell.readscheme.org/
It seems that this site hasn't been updated properly. But it might make the starting point of a new attempt.

Cheers,

/Josef 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A Tool To Show Functions Relationship?

2005-06-09 Thread Josef Svenningsson
On 6/6/05, Dimitry Golubovsky [EMAIL PROTECTED] wrote:
 Does there exist a tool which given a Haskell source, shows functions
 that are mutually recursive (i. e. call each other, even via calling
 third, etc. functions)? Knowledge of that would help to split the
 module into smaller modules without risk to create recursive modules.
 
When you sent this mail I seemed to recall a simple tool written by
Martin Nordbäck which could take a Haskell module and generate its call
graph. But when I searched the web for it I couldn't find it.
But to my surprise I found it today when wading through the heaps of
old Haskell code that I have around (looking for something completely
different.) I'm attaching it in the hope that you will find it useful.
It may have suffered from slight bit rot but it should be fairly easy
to get it up and running.

Cheers,

/Josef
-- Copyright (C) 2001 Safelogic AB
-- This program is free software; you can redistribute it and/or modify
-- it under the terms of the GNU General Public License as published by
-- the Free Software Foundation; either version 2 of the License, or
-- (at your option) any later version.
-- Last change: 2001-04-27

import HsParser
import HsParseMonad
import HsSyn
import List((\\), nub, partition)
import System
import Char(isAlphaNum)

parse_contents :: String -> ParseResult HsModule
parse_contents contents = parse contents (SrcLoc 0 0) 0 []

main :: IO ()
main = do
  args <- getArgs
  main' args

main' files = do
  contents <- mapM readFile files
  let allPairs = map get_names contents
  let allnames = map fst (concat allPairs)
  putStr "digraph call_graph {\n"
  let (subgraphs,arrows) = unzip $ map (subgraph allnames) (zip3 (map show [1..]) files allPairs)
  putStr $ unlines $ subgraphs
  putStr $ unlines $ arrows
  putStr "}"

subgraph allnames (num,name,pairs) =
  let shown = [ (name, filter (\x -> x `elem` allnames) vars)
              | (name, vars) <- pairs]
  in ("subgraph cluster_" ++ num ++ " {\n" ++
      "label=\"" ++ name ++ "\";\n" ++
      unlines [ show_name name ++ ";" | (name,_) <- shown ] ++
      "}\n",
      unlines (map show_arrows shown))

get_names contents =
  let result = parse_contents contents
  in  case result of
    Failed string -> error "Parse failed: "
    Ok _ (HsModule mod exports imports decls) ->
      let pairs = map (get_vars_decl []) decls
          -- The first in the pair is the function name, this function
          -- returns a list, but there will only be one element in it.
          pairs' = [(name,vars) | (name:[],vars) <- pairs]
          -- combine all names which are doubled
          pairs'' = combine_firsts pairs'
      in pairs''

combine_firsts pairs = case pairs of
  [] -> []
  (name, _):_ ->
    let (same_list, other_list) = partition (\(x,_) -> x==name) pairs
    in (name, nub (concatMap snd same_list)):combine_firsts other_list

show_arrows :: (HsName, [HsName]) -> String
show_arrows (name, calls) = case calls of
  --[] -> show_name name ++ ";\n"
  _  -> unlines [show_name name ++ " -> " ++ show_name call ++ ";"
                | call <- calls ]

show_name :: HsName -> String
show_name name = case name of
  Qual (Module mod) string -> fix_name (mod ++ "_" ++ string)
  UnQual string -> fix_name string

fix_name :: String -> String
fix_name name = "\"" ++ name ++ "\""
-- fix_name name = map (\x -> if isAlphaNum x || x == '_' then x else '_') name

get_vars_decls :: [HsName] -> [HsDecl] -> ([HsName], [HsName])
get_vars_decls ignore decls =
  let (names,vars) = unzip (map (get_vars_decl ignore) decls)
  in (concat names, concat vars)

get_vars_decl :: [HsName] -> HsDecl -> ([HsName], [HsName])
get_vars_decl ignore decl = case decl of
  HsFunBind _ [HsMatch _ name pats rhs decls] ->
    let patvars = concatMap get_vars_pat pats
        vars = get_vars_rhs (ignore++patvars) rhs ++
               snd (get_vars_decls (ignore++patvars) decls)
    in ([name], nub vars)
  HsPatBind _ pat rhs decls ->
    let vars = get_vars_rhs ignore rhs ++
               snd (get_vars_decls (ignore++names) decls)
        names = get_vars_pat pat
    in (names, nub vars)
  _ -> ([],[])

get_vars_rhs :: [HsName] -> HsRhs -> [HsName]
get_vars_rhs ignore rhs = case rhs of
    HsUnGuardedRhs exp -> get_vars_exp ignore exp
    HsGuardedRhss guardedrhss ->
      concatMap (get_vars_guardedrhs ignore) guardedrhss

get_vars_guardedrhs :: [HsName] -> HsGuardedRhs -> [HsName]
get_vars_guardedrhs ignore rhs = case rhs of
  HsGuardedRhs _ e1 e2 -> get_vars_exps ignore [e1,e2]

get_vars_exps :: [HsName] -> [HsExp] -> [HsName]
get_vars_exps ignore exps = concatMap (get_vars_exp ignore) exps

get_vars_exp :: [HsName] -> HsExp -> [HsName]
get_vars_exp ignore exp = case exp of
  HsVar name  -> if name `elem` ignore then [] else [name]
  HsInfixApp e1 e2 e3 -> get_vars_exps ignore [e1,e2,e3]
  HsApp e1 e2 -> get_vars_exps ignore [e1,e2]
  HsNegApp e  -> get_vars_exp ignore e
  HsLambda _ e    -> get_vars_exp ignore e
  HsLet decls e   ->
    let (ignores,vars) = get_vars_decls (ignores++ignore) decls
    in 

Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-15 Thread Josef Svenningsson
On Mon, 14 Feb 2005 19:01:53 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I was thinking more along the lines of Ralf Hinze's nondeterminism
 transformer monad:
 
 http://haskell.org/hawiki/NonDeterminism
 
 The relevant instance is this:
 
  instance (Monad m) => MonadPlus (NondetT m)
 
 That is, if m is a Monad, then NondetT m is a MonadPlus.  This is not
  true if a requirement for MonadPlus is that it include the "mzero is a
  right zero for bind" law.  Indeed, such a transformer is impossible to
 write if that law is a requirement.
 
Ah, I see. You are quite right.

  You claimed that monad transformers break the
  mzero-is-right-identity-for-bind law because they can be applied to
  IO. I say, it's not the monad transformers fault. They cannot possibly
  be expected to repair the law if they are given a faulty monad.
 
 IO is not a faulty monad.  It satisfies all of the laws that a monad is
 supposed to satisfy.
 
Sloppy terminology on my side again. What I meant to say is that any
MonadPlus instance of IO is faulty if we insist on the
mzero-is-right-identity-for-bind law. I agree with you that the law
should be dropped.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-14 Thread Josef Svenningsson
On Sun, 13 Feb 2005 19:08:26 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Quoting Josef Svenningsson [EMAIL PROTECTED]:
 
  I think it's unfair to the monad transformers to simply say that they
  don't obey the law. The interesting thing is whether they *preserve*
  the law. A monad transformer T preserves a law if given a monad M
  which obeys the law holds then the monad T M obeys the law.
 
 The law in question is that mzero is a right-zero for bind.  How can an
 underlying monad be said to obey this law if it doesn't support mzero?
 
You're of course absolutely right that it doesn't make sense to talk
about mzero being a right-identity for bind if the monad doesn't
support mzero. I should have been more clear. Let me have another try
at explaining myself.

Let's consider a specific monad transformer, say (ReaderT r). What I
hope to convince you of is that (ReaderT r) cannot be said to break the
mzero-is-right-identity-for-bind law. Now, if we look at the MonadPlus
instance for (ReaderT r) it looks like this:
\begin{code}
instance (MonadPlus m) => MonadPlus (ReaderT r m) where
    mzero       = ReaderT $ \_ -> mzero
    m `mplus` n = ReaderT $ \r -> runReaderT m r `mplus` runReaderT n r
\end{code}
The important thing to note here is that the above instance
declaration relies on an underlying monad m with mzero and mplus. If
we try (and indeed succeed) to prove that (ReaderT r m) satisfies the
mzero-is-right-identity-for-bind law, we will see that the proof depends
crucially on the fact that m also obeys the law. This is the best that
the monad transformer can do, namely to preserve the law.
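
To make the preservation point concrete, here is a small sketch (mine, not
part of the original exchange), assuming the ReaderT instances from the
transformers package and the MonadPlus instance for IO from base. Over
Maybe, which obeys the law, the lifted mzero really is a right zero; over
IO the violation is simply inherited:

\begin{code}
import Control.Monad (mzero)
import Control.Monad.Trans.Reader (ReaderT (..))

-- Over Maybe (which satisfies the law) the lifted law holds:
overMaybe :: ReaderT Int Maybe Int
overMaybe = ReaderT (\r -> Just (r + 1)) >> mzero   -- behaves exactly like mzero

-- Over IO (which cannot satisfy the law) the violation is inherited:
overIO :: ReaderT Int IO ()
overIO = ReaderT (\_ -> putStrLn "effect escapes") >> mzero

main :: IO ()
main = do
  print (runReaderT overMaybe 0)   -- Nothing, i.e. the same as runReaderT mzero 0
  -- runReaderT overIO 0 would print "effect escapes" before failing,
  -- which is observably different from runReaderT mzero 0.
\end{code}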

You claimed that monad transformers break the
mzero-is-right-identity-for-bind law because they can be applied to
IO. I say, it's not the monad transformers fault. They cannot possibly
be expected to repair the law if they are given a faulty monad.

I hope this makes things a little clearer.

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-14 Thread Josef Svenningsson
On Mon, 14 Feb 2005 10:07:41 -0500, Jacques Carette [EMAIL PROTECTED] wrote:
 Josef Svenningsson [EMAIL PROTECTED] wrote:
  You claimed that monad transformers break the
  mzero-is-right-identity-for-bind law because they can be applied to
  IO. I say, it's not the monad transformers fault. They cannot possibly
  be expected to repair the law if they are given a faulty monad.
 
 Doesn't that argue for allowing proven and unproven Monads in Haskell?
 
I believe you misunderstand me. The point that I was trying to make
was about monad transformers. I was saying that the best they can do
with respect to a law is to preserve it, which I believe all monad
transformers should do. Otherwise they don't deserve the name.

Turning to monads: they should certainly fulfil the laws we associate
with a monad. Otherwise they wouldn't be monads! I am not proposing or
encouraging that one should declare a data type an instance of Monad
if it does not obey the monad laws.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-13 Thread Josef Svenningsson
On Sun, 13 Feb 2005 17:59:57 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 G'day all.
 
 Quoting Remi Turk [EMAIL PROTECTED]:
 
  According to http://www.haskell.org/hawiki/MonadPlus (see also
  the recent thread about MonadPlus) a MonadPlus instance
  should obey m >> mzero === mzero, which IO doesn't. IOW, the
  MonadPlus instance for IO (defined in Control.Monad.Error)
  probably shouldn't be there.
 
 Clearly the wiki page has not been updated to reflect the current
 debate. :-)
 
 I've changed the wording to this.  Anyone disagree?
 
 Note: There are theoretical reasons why ''mzero'' should be a
 right-zero for (>>=), but surprisingly few of the existing MonadPlus
 instances actually obey this law.  {{{IO}}} does not, and neither do
 any [MonadTransformer]s, since they may be stacked on top of {{{IO}}}.
 This suggests that either some of the extant MonadPlus instances are
 inappropriate, or that the law itself might be incorrect.  There is
 continuing debate over this, and the dust has not yet settled.
 
I think it's unfair to the monad transformers to simply say that they
don't obey the law. The interesting thing is whether they *preserve*
the law. A monad transformer T preserves a law if, given a monad M
which obeys the law, the monad T M also obeys the law. I haven't
checked whether this is the case for any of the monad transformers in
the hierarchical libraries, though. But I think the wording should be
changed so that they aren't blamed for breaking the law.

(I can't believe I'm taking sides with monad transformers as if they
were human. I spend too much time hacking Haskell, I guess...)

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parse text difficulty

2004-12-10 Thread Josef Svenningsson
On Thu, 09 Dec 2004 10:18:12 -0500, Robert Dockins
[EMAIL PROTECTED] wrote:
   And I thought that most programmers used zipWith, which has to be
   prefix.
 
 Is this true?  Can you not use backticks on a partially applied
 function?  If so, it seems like such a thing would be pretty useful
 (although I've never actually had occasion to need it, so)  I'll dig
 out the report and check sometime, but does anyone know for sure that
 the following wouldn't work?
 
 [1..5] `zipWith (+)` [7..]
 
It is possible to emulate this behaviour with some operator trickery. See:
http://www.haskell.org/pipermail/haskell-cafe/2002-July/003215.html
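
For illustration, here is one version of that kind of trickery, a sketch of
the general idea (the operator names |> and <| are my own and may well
differ from the ones in the linked post):

\begin{code}
infixl 0 |>, <|

(|>) :: a -> (a -> b) -> b
x |> f = f x

(<|) :: (a -> b) -> a -> b
f <| x = f x

-- Reads almost like the backtick version above:
example :: [Int]
example = [1..5] |> zipWith (+) <| [7..]   -- [8,10,12,14,16]
\end{code}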

/Josef
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: not possible with monad transformers ?

2004-12-01 Thread Josef Svenningsson
On Tue, 30 Nov 2004 18:36:46 + (UTC), Pavel Zolnikov
[EMAIL PROTECTED] wrote:
 [..]
 type M2 a = OutputMonadT Maybe String a
 whenError :: M2 a -> M2 a -> M2 a

 1   foo a b = do
 2  output "before"
 3  let r = liftM2 (+) a b
 4   `whenError` $ reportError "error"
 5  return r
 
 whenError combines two computations so that if the first fails it will use
 the second instead.
 
FYI, this is what the class MonadPlus is meant for. Check out the
Monad module in the standard library.
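
As a tiny illustration (my own sketch, using plain Maybe rather than the
OutputMonadT stack from your message), mplus already provides exactly the
fall-back behaviour that whenError describes:

\begin{code}
import Control.Monad (mplus)

whenError :: Maybe a -> Maybe a -> Maybe a
whenError = mplus

example :: Maybe Int
example = Nothing `whenError` Just 42   -- Just 42: the second computation is used
\end{code}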

/Josef
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] RE: [Haskell] Lexically scoped type variables

2004-11-25 Thread Josef Svenningsson
Let me just begin by sharing my experience with scoped type variables. I've
found them very useful in a project where I was to generalize a substantial
code base. Many of the functions had local definitions whose types were
simply not expressible without scoped type variables. During this work I
found an interesting interplay between multi-parameter type classes and
scoped type variables. An example would look something like this:

f :: C a b c => a -> b
f a = 
   let g = ...

The important thing to note is that the type variable c in f's signature
occurs neither in the argument nor the result type of f. Hence c is not
bindable by lexically scoped type variables. But suppose c is part of g's
type. Then we still have no way of expressing g's type! This thing has
bitten me and so I welcome the change you're planning.
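
To make the shape of the problem concrete, here is a sketch; the class C
and its methods are made up for the example, and it relies on an explicit
forall in the signature bringing c into scope (essentially the rule
discussed in the message quoted below), as GHC's ScopedTypeVariables now
allows:

\begin{code}
{-# LANGUAGE ScopedTypeVariables, MultiParamTypeClasses, FunctionalDependencies #-}

-- A made-up class in the spirit of the situation described: c is
-- determined by a, but occurs in neither the argument nor the result
-- type of f, so only a scoped type variable lets us write g's signature.
class C a b c | a -> b c where
  analyse :: a -> c
  rebuild :: a -> c -> b

f :: forall a b c. C a b c => a -> b
f a =
  let g :: c            -- expressible only because c is in scope
      g = analyse a
  in rebuild a g
\end{code}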

As for your questions I would prefer 1b although I don't think it is a big
deal. For your second question I would go for 2c (which you've called 2b :).
That's because I don't use result type signatures. Other people will surely
disagree.

Cheers,

/Josef

 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
 Behalf Of Simon Peyton-Jones
 Sent: den 25 november 2004 11:44
 To: [EMAIL PROTECTED]
 Subject: [Haskell] Lexically scoped type variables
 
 | This is a great example, thanks for posting it. However, I feel like
 | the real problem in this example is the lexically-scoped type
 | variables declared with your function f.
 
 Paul's message gives me an excuse to consult the Haskell list about the
 design of lexically scoped type variables.   I have two particular
 questions I'd like your advice about.  (This is nothing to do with
 monomorphism restriction, or Lennart's original puzzle.)
 
 I'm sending this message to the main Haskell list, to maximise
 readership, but I suggest that
 
   you reply to [EMAIL PROTECTED]
 
 Thereby to avoid spamming the main Haskell list, which is meant to be
 low-bandwidth.  I believe I've set the reply-to field of this message to
 achieve this goal.  [Perhaps others can do the same?!]
 
 Simon
 
  Background 
 
 First, for background you might want to glance at Lexically scoped type
 variables, an unpublished paper available at
   http://research.microsoft.com/%7Esimonpj/papers/scoped-tyvars
 I don't want to take time in this message to discuss why we want scoped
 type variables; I'll just take it for granted here.
 
 One design choice discussed in the paper, and implemented in GHC, is
 that we can bring new lexically-scoped type variables into scope by
 mentioning them in a pattern type signature:
   f (x::a) = 
 Here 'a' is a name for the type of x, and 'a' scopes over f's body, and
 can be mentioned in type signatures in f's body.  Sometimes one wants to
 name types in the *result* of f, rather than in its *arguments*.  For
 example
   head (xs::[a]) :: a = ...
 Here, the :: a before the = gives the type of head's result.  If a
 result type signature binds a type variable, that type variable scopes
 over the right hand side of the definition.
 
 In GHC, it's not necessary for the type that 'a' names to be a type
 variable; it could be Int, for example:
   f (x::a) = x + (1::Int)
 Already this embodies several design choices.  For example, we know that
 'a' is a *binding* for 'a' because there is no 'a' already in scope,
 otherwise it'd be an *occurrence* of 'a'.  (An alternative would be to
 mark binding sites somehow.)  Also one could require that 'a' names a
 type variable rather than an arbitrary type.  But for this message I
 want to take GHC's design as given.
 
  QUESTION 1 -
 
 In GHC at present, a separate type signature introduces no scoping.  For
 example:
   f :: forall a. a -> a
   f x = (x::a)
 would be rejected, because the type signature for 'f' does not make
 anything scope over the right-hand side, so (x::a) means (x::forall
 a.a), which is ill typed.
 
 An obvious alternative, taken up by (among others) Mondrian and
 Chamaeleon, is to say that
 
   the top-level quantified variables of a separate type signature
   scope over the right hand side of the binding
 
 In the above example, then, the 'a' bound by the 'forall' would scope
 over the right hand side of 'f'.
 
 I've argued against this in the past, because the construct (forall a.
 <type>) patently scopes 'a' only over <type>.  But my resolve is
 weakening.  It's just too convenient!  Furthermore, it binds a *rigid*
 type variable (i.e. 'a' really must name a type variable) and that's
 important for GADTs... which strengthens the case.
 
 In short, I'm considering adopting the Mondrian/Chameleon rule for GHC.
 There are two variations
 
   1a) In the example, 'a' is only brought into scope in the
       right hand side if there's an explicit 'forall' written by
       the programmer
   1b) It's brought into scope even if the forall is implicit; e.g.
    f :: a -> a
   f x = 

[Haskell-cafe] Interview with David Roundy on darcs and Haskell

2004-11-25 Thread Josef Svenningsson
Hi all,

At osdir there is a nice interview with David Roundy about darcs, the
revision control system written in Haskell. He has a few comments about
Haskell as well. Read it here:
http://osdir.com/Article2571.phtml

This was also covered on /. (which is where I found it).

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] empty Array?

2004-10-11 Thread Josef Svenningsson
 It is, of course, trivial to implement this for lists.  I've run into
 a snag, however, when trying to implement this for Arrays (as in
 Data.Array) - I can't seem to find a way to represent an empty array,
 which makes implementing 'empty' and 'null' impossible.  Suggestions?
 
Empty arrays can be achieved by having bounds where the lower bound is
greater than the upper bound. An example would be:

 empty = array (1,0) []

The function null would check the bounds to see if the lower bound is
greater than the upper bound.
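
Putting the two pieces together, a minimal sketch (the names emptyArr and
nullArr are mine, chosen to avoid clashing with the Prelude; rangeSize
conveniently reports 0 for such inverted bounds):

\begin{code}
import Data.Array (Array, array, bounds)
import Data.Ix (rangeSize)

emptyArr :: Array Int e
emptyArr = array (1, 0) []          -- lower bound above the upper bound: no elements

nullArr :: Array Int e -> Bool
nullArr = (== 0) . rangeSize . bounds
\end{code}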

HTH,

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results

2004-10-07 Thread Josef Svenningsson
Andre,
I very much enjoyed reading your blog entry. I would like to make a few 
comments.

First of all I heartily agree with what you call the main problem. I
quote: "The main problem I see with all this is that it's just too hard
for an average Haskell programmer to get good performance out of
Haskell." This is just so true, and something we need to do
something about. What I would like to comment on is why we have this
problem and what we can/should do about it.

Why is it difficult to write high-performance Haskell programs? Well,
I'm not going to try to discuss this general question but focus on I/O,
since that is what this thread is really about. So let's narrow the
question to: why is it difficult to write high-performance I/O in
Haskell? From your blog entry I take it that you think there is
something wrong with the language Haskell. Maybe I misinterpret you, but
this is how I read it. I don't quite agree with this. What *is* true is
that it is difficult to write high-performance I/O in any of the
*implementations* that we have. And this is of course a shame. I think
this is partly because fast I/O hasn't been that high a priority. I
recall an email dating a few years back where Simon PJ said that
they hadn't spent much time on making I/O blazingly fast. So
perhaps bringing the issue to the table and making the implementors
aware of it will improve the situation.

Although I do not believe that the Haskell language itself is to blame
for the difficulty of writing decently performing I/O, the standard
libraries might be a problem. It may be that the functions they provide
are difficult to make efficient and that they encourage abuse. One
particular function I have in mind is getContents. Maybe we need another
set of functions which, while still being high-level, are much easier to
implement efficiently. Work in this direction has already started with
the BlockIO library. I think this is an exciting and promising route to
take. Andre, you suggest adding syntax to help the programmer write
faster code. Somehow I don't think that is the right way to go, even if
it is only my gut feeling. I think this problem can and should be solved
at the library level rather than at the language design level. But I
might of course be wrong.

Ok, enough preaching for today. For the record, I also recommend O'Caml 
rather than Haskell to my performance-aware friends.

/Josef
Andre Pang wrote:
On 29/09/2004, at 8:41 AM, Graham Klyne wrote:
I can see that this requires the original file to be kept for 3-time 
scanning, so enough memory for the entire file will be required. Is 
that *the* problem to which you allude? I can't see any other problem 
here. And why would this put Haskell at a disadvantage?

I've been watching this thread with interest, and posted my own 
thoughts on this thread and Haskell's performance in general as a blog 
entry. Rather than repeat it all here, I'll post a link to it:

http://www.algorithm.com.au/mt/haskell/haskells_performance.html
The executive summary of my thoughts is that it seems to be entirely 
possible to optimise Haskell to be competitive with other, more 
performance-focused languages, but it's hard, and you have to be a 
Haskell expert to do so. One possible solution may be to allow for 
some extra, syntactically integrated declarations to be inserted by 
the programmer which enables much better optimisation (e.g. see how to 
write the unboxed strict array example in Clean: much clearer and less 
work than using IOUArrays). Performance is the one major reason I 
recommend many existing C programmers try out O'Caml rather than 
Haskell as their first functional programming language, and it would 
be really nice if optimisation was made a bit easier.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] dimension of arrays

2004-03-29 Thread Josef Svenningsson
On Mon, 29 Mar 2004, Fred Nicolier wrote:

 Is there a way to get the number of dimensions of an array? I.e.
 something like:

  dims :: (Ix a) => Array a b -> Int
  dims = ...
  a = listArray (1,10) [1,2..]
  b = listArray ((1,1),(10,10)) [1,2..]
  dims a -- should be equal to 1
  dims b -- should be equal to 2

 The key is somewhere in the Ix class but where ?

In a sense Haskell arrays are always one dimensional. But as you noted
tuples are used to achieve higher dimensionality. As far as I know there
is no way of asking for the dimension of an array. You could write your
own class for that though. Here's a suggestion:

\begin{code}
import Data.Array

dims :: (Ix a, HasDimension a) => Array a b -> Int
dims arr = dimension (fst (bounds arr))   -- query the lower bound; works even for empty arrays

class HasDimension a where
  dimension :: a -> Int

instance HasDimension Int where
  dimension _ = 1

instance HasDimension Float where
  dimension _ = 1

instance HasDimension (a,b) where
  dimension _ = 2

instance HasDimension (a,b,c) where
  dimension _ = 3
\end{code}

And so forth. Beware. The code is untested.
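Still, the arrays from your message should then give the hoped-for answers,
since they follow directly from the instances above:

\begin{code}
a :: Array Int Double
a = listArray (1, 10) [1 ..]

b :: Array (Int, Int) Double
b = listArray ((1, 1), (10, 10)) [1 ..]

-- dims a == 1
-- dims b == 2
\end{code}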

Hope this helps.

/Josef
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Sat solver

2004-02-05 Thread Josef Svenningsson
Hi,

On Thu, 5 Feb 2004, Ron de Bruijn wrote:

 Hi there,

 I need a complete 3-CNF-Sat solver that can solve
 sentences of about length 20 (or shorter).

 Now I use simple model checking, but that's a bit slow
 , you understand :)

 I have seen some algorithms on the web and some
 code-sniplets in papers. But I presume there is some
 implementation available, so I thought: Let's ask, and
 don't reinvent the wheel.

Well, I don't know of any good SAT solver written in Haskell. But there are
plenty written in C/C++. One example is Satzoo, which is pretty good:
http://www.cs.chalmers.se/~een/Satzoo/

I have a Haskell binding to Satzoo if you're interested. Just mail me.

HTH,

/Josef
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Monad @ Microsoft

2003-10-31 Thread Josef Svenningsson

OK, this one is just too good not to post about it.

Microsoft has developed a new command line interface. Guess what they call
it? Read it here:
http://slashdot.org/articles/03/10/31/1346201.shtml?tid=185tid=190tid=201

Well, it seems Simon PJ is doing a good job introducing functional
programming at MS, although his methods are subtle in the extreme :-)

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: fixed point

2003-10-28 Thread Josef Svenningsson

Sorry about replying to my own mail.

On Mon, 27 Oct 2003, Josef Svenningsson wrote:

 On Mon, 27 Oct 2003, Paul Hudak wrote:

  Thomas L. Bevan wrote:
Is there a simple transformation that can be applied to all
recursive functions to render them non-recursive with fix.
 
  Suppose you have a LET expression with a set of (possibly mutually
  recursive) equations such as:
 
  let f1 = e1
      f2 = e2
      ...
      fn = en
  in e
 
  The following is then equivalent to the above, assuming that g is not
  free in e or any of the ei:
 
  let (f1,...,fn) = fix g
      g ~(f1,...,fn) = (e1,...,en)
  in e
 
  Note that all recursion introduced by the top-most LET has been removed
  (or, if you will, concentrated into the one application of fix).  This
  transformation will work even if there is no recursion, and even if some
  of the fi are not functions (for example they could be recursively
  defined lazy data structures).
 
 This is a very nice technique. As an exercise to the reader I suggest the
 following program:

 \begin{code}
 data Tree a = Branch a (Tree (a,a)) | Leaf

 cross f (a,b) = (f a,f b)

 main1 =
   let mapTree :: (a -> b) -> Tree a -> Tree b
       mapTree = \f tree -> case tree of
         Branch a t -> Branch (f a) (mapTree (cross f) t)
         Leaf -> Leaf
   in mapTree id (Branch 42 Leaf)
 \end{code}

I realise I was perhaps a bit dense in my previous mail. It was not my
intention to try to sound smart. Sorry for that.

Does anyone know how to apply the transformation suggested by Paul Hudak
to my program and make it typecheck? Does there exist a type system where
the transformed program typechecks? I suppose so but I don't quite know
what it would look like.
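
One answer I can offer (a sketch of my own, not something from this thread)
is to move to a type system with higher-rank polymorphism: wrap the
polymorphic mapTree in a newtype with a rank-2 field, so that fix is applied
at a single monomorphic type. With GHC's RankNTypes extension the following
typechecks (Tree and cross are repeated here to keep the sketch
self-contained):

\begin{code}
{-# LANGUAGE RankNTypes #-}
import Data.Function (fix)

data Tree a = Branch a (Tree (a,a)) | Leaf

cross :: (a -> b) -> (a,a) -> (b,b)
cross f (a,b) = (f a, f b)

-- The recursive knot is tied at the monomorphic type MapTree,
-- whose single field is polymorphic.
newtype MapTree =
  MapTree { unMapTree :: forall a b. (a -> b) -> Tree a -> Tree b }

mapTree :: (a -> b) -> Tree a -> Tree b
mapTree = unMapTree (fix g)
  where
    g :: MapTree -> MapTree
    g mt = MapTree (\f tree -> case tree of
             Branch a t -> Branch (f a) (unMapTree mt (cross f) t)
             Leaf       -> Leaf)

main1 :: Tree Int
main1 = mapTree id (Branch 42 Leaf)
\end{code}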

All the best,

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: fixed point

2003-10-27 Thread Josef Svenningsson
On Mon, 27 Oct 2003, Paul Hudak wrote:

 Thomas L. Bevan wrote:
   Is there a simple transformation that can be applied to all
   recursive functions to render them non-recursive with fix.

 Suppose you have a LET expression with a set of (possibly mutually
 recursive) equations such as:

 let f1 = e1
     f2 = e2
     ...
     fn = en
 in e

 The following is then equivalent to the above, assuming that g is not
 free in e or any of the ei:

 let (f1,...,fn) = fix g
     g ~(f1,...,fn) = (e1,...,en)
 in e

 Note that all recursion introduced by the top-most LET has been removed
 (or, if you will, concentrated into the one application of fix).  This
 transformation will work even if there is no recursion, and even if some
 of the fi are not functions (for example they could be recursively
 defined lazy data structures).

This is a very nice technique. As an exercise to the reader I suggest the
following program:

\begin{code}
data Tree a = Branch a (Tree (a,a)) | Leaf

cross f (a,b) = (f a,f b)

main1 =
  let mapTree :: (a -> b) -> Tree a -> Tree b
      mapTree = \f tree -> case tree of
        Branch a t -> Branch (f a) (mapTree (cross f) t)
        Leaf -> Leaf
  in mapTree id (Branch 42 Leaf)
\end{code}

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: can a lazy language give fast code?

2002-07-30 Thread Josef Svenningsson

Hi,

On Mon, 29 Jul 2002, Scott J. wrote:

 Can one write with the Haskell compiler faster code than in the
 examples of http://www.bagley.org/~doug/shootout/ where GHC (old
 Haskell 98?) seems to be much slower than OCaml or MLton, both strict
 functional languages. Can one expect any improvements in speed in the
 future?

There have been speed improvements in the past. I recommend reading Urban
Boquist's thesis, where he describes a whole-program Haskell compiler which
generates pretty fast code. The thesis is very readable and I recommend it
heartily to everyone with even the slightest interest in compiling lazy
languages.
It can be found here:

http://www.cs.chalmers.se/~boquist/

While we're on the subject, there are a few things I need to get off my chest.

I think the reason why Haskell compilers aren't generating any faster code
is that there is a lack of competition among different compilers. And I
think that the lack of competition comes from the fact that no one wants
to write a front end for Haskell. It's simply too complicated and too much
boring work before one gets to the interesting details. I know there is
work on creating standardised front ends, and this is a step in the right
direction. But the current state of affairs is the price we've had to pay
to have such a feature-rich language.

All the best,

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Haskell problem

2002-02-21 Thread Josef Svenningsson

On Thu, 21 Feb 2002, Mark Wotton wrote:

 Hi,

 I'm trying out some combinatorial parsers, and I ran into a slightly
 inelegant construction. To parse a sequence of things, we have a function
 like

 pThen3 :: (a -> b -> c -> d) -> Parser a -> Parser b -> Parser c -> Parser d
 pThen3 combine p1 p2 p3 toks =
     [ (combine v1 v2 v3, toks3) | (v1, toks1) <- p1 toks,
                                   (v2, toks2) <- p2 toks1,
                                   (v3, toks3) <- p3 toks2 ]

 The problem with this is that this structure has to be duplicated for
 pThen2, pThen4, and so on. These other forms are very similar to pThen3,
 but there seems to be no way to capture this in Haskell's type system, as
 the combine function has a different signature for each pThenX. (This is
 actually the first time the Haskell type system has got in my way rather
 than helping.) Is there a way around this problem?

Yes, there is a way around this problem. You can use multi-parameter type
classes to create (and give a type to) a function such as pThenX. The
details are worked out in the paper "Faking it" by Conor McBride, which
shows how to implement a generic zipWith in Haskell. The same
technique should work for your function. The paper can be found at:
http://www.dur.ac.uk/~dcs1ctm/
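
For comparison, here is a simpler sketch that sidesteps the arity problem
without the class machinery of "Faking it": with a parser-level application
combinator, every pThenN collapses to a one-liner, so the structure is never
duplicated. (Parser and Token below are simple stand-ins for your types.)

\begin{code}
type Token    = Char                        -- stand-in token type for the sketch
type Parser a = [Token] -> [(a, [Token])]

pReturn :: a -> Parser a
pReturn x toks = [(x, toks)]

pApply :: Parser (a -> b) -> Parser a -> Parser b
pApply pf pa toks =
  [ (f v, toks'') | (f, toks') <- pf toks, (v, toks'') <- pa toks' ]

pThen3 :: (a -> b -> c -> d) -> Parser a -> Parser b -> Parser c -> Parser d
pThen3 combine p1 p2 p3 = pReturn combine `pApply` p1 `pApply` p2 `pApply` p3

pThen4 :: (a -> b -> c -> d -> e)
       -> Parser a -> Parser b -> Parser c -> Parser d -> Parser e
pThen4 combine p1 p2 p3 p4 =
  pReturn combine `pApply` p1 `pApply` p2 `pApply` p3 `pApply` p4
\end{code}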

Cheers,

/Josef

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe