Re: simultaneous ghc versions

2015-08-02 Thread Josef Svenningsson
On Fri, Jul 31, 2015 at 10:31 PM Erik de Castro Lopo mle...@mega-nerd.com
wrote:

 I maintain multiple versions of GHC on all the machines I use regularly
 for Haskell development. I have:

 * ghc-7.6.3 installed under /usr/lib/ghc-7.6/
 * ghc-7.8.4 installed under /usr/lib/ghc-7.8/
 * ghc-7.10.2 installed under /usr/lib/ghc-7.10/

 To switch between versions all I need to do is modify $PATH
 to remove say /usr/lib/ghc-7.6/bin and add /usr/lib/ghc-7.10/bin.
 This lets me have two terminal windows side by side with different
 versions of GHC.

 I actually have a shell function to do this PATH mangling. I can
 document this more fully if anyone is interested.


I have a similar setup where I install different versions of GHC in
different directories. But I use GNU stow[1] to create and remove symlinks
in /usr/local/* so that I can use one version of GHC as the default.

[1] https://www.gnu.org/software/stow/

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


[Haskell] Call for Contributions: Haskell Implementors' Workshop 2015

2015-07-16 Thread Josef Svenningsson
Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2015
Vancouver, Canada, 30 August, 2015

Co-located with ICFP 2015
   http://www.icfpconference.org/icfp2015/

Important dates
---------------
Proposal Deadline:  Monday,  3 August, 2015 (midnight, anywhere on earth)
Notification:   Monday, 10 August, 2015
Workshop:   Sunday, 30 August, 2015

The 7th Haskell Implementors' Workshop is to be held alongside ICFP
2015 this year in Vancouver. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Attendance figures clearly reflect the growth of the Haskell user
community and a constant interest in implementation aspects.

Scope and target audience
-------------------------

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2015. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-----

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw15.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
----------

  * Richard Eisenberg  (University of Pennsylvania)
  * Edward Kmett   (SP Capital IQ)
  * Hans-Wolfgang Loidl(Heriot-Watt University)
  * Trevor L. McDonell   (Indiana University)
  * Deian Stefan   (Stanford University)
  * Josef Svenningsson - chair (Chalmers University of Technology)

Contact
-------

  * Josef Svenningsson josef.svenningsson at chalmers.se
___
Haskell mailing list
Haskell@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell


Re: [Haskell-cafe] monadic DSL for compile-time parser generator, not possible?

2013-03-13 Thread Josef Svenningsson
Jeremy,

The problem you're trying to solve might seem tricky but it is in fact
quite solvable. In Feldspar[1] we use monads quite frequently and generate
code from them, in a similar fashion to what you're trying to do. We've
written a paper about how we do it[2] that I welcome you to read. If you
have any questions regarding the paper I'd be happy to try to answer them.
There are two parts to the trick. One is to use the continuation monad to
get a monad instance. The other trick is to restrict any functions you have
in your data type (like FMap in your example) so that they can be reified
into something that can be compiled, which would be Template Haskell in
your case.

To help you along the way I've prepared some code to give you an idea of
how this can be done. This code only shows the continuation monad trick but
I hope it's useful nonetheless.

\begin{code}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE GADTs #-}
module MonadReify where

newtype Cont r a = C { unC :: (a -> r) -> r }

instance Monad (Cont r) where
  return a = C $ \k -> k a
  m >>= f  = C $ \k -> unC m (\a -> unC (f a) k)

data ParserSpec a where
  AnyChar :: ParserSpec Char
  Return  :: a -> ParserSpec a
  Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
  FMap    :: (a -> b) -> ParserSpec a -> ParserSpec b

bindSpec p f = Join (FMap f p)

newtype Parser a = P { unP :: forall r. Cont (ParserSpec r) a }

instance Monad Parser where
  return a = P (return a)
  m >>= f  = P (unP m >>= \a -> unP (f a))

anyChar :: Parser Char
anyChar = P (C $ \k -> bindSpec AnyChar k)

reifyParser :: Parser a -> (forall r. ParserSpec r -> b) -> b
reifyParser (P (C f)) g = g (f (\a -> Return a))
\end{code}
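
To give a flavour of the second trick as well, here is a minimal, untested
sketch of how FMap could be restricted to a reifiable, first-order function
representation. The Fun type and its constructors are just invented for
illustration; how useful this is depends entirely on how expressive you make
Fun.

\begin{code}
{-# LANGUAGE GADTs #-}
module ReifiableFun where

-- A tiny, deeply embedded function language. Unlike an opaque (a -> b),
-- values of this type can be inspected and translated, e.g. to Template
-- Haskell.
data Fun a b where
  Id    :: Fun a a
  Const :: b -> Fun a b
  Pair  :: Fun a b -> Fun a c -> Fun a (b, c)
  Fst   :: Fun (a, b) a
  Snd   :: Fun (a, b) b

-- The FMap constructor then stores a 'Fun a b' instead of an arbitrary
-- Haskell function.
data ParserSpec a where
  AnyChar :: ParserSpec Char
  Return  :: a -> ParserSpec a
  Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
  FMap    :: Fun a b -> ParserSpec a -> ParserSpec b

-- Reifiable functions can still be run directly...
apply :: Fun a b -> a -> b
apply Id         = id
apply (Const b)  = const b
apply (Pair f g) = \a -> (apply f a, apply g a)
apply Fst        = fst
apply Snd        = snd
-- ...and, crucially, they can also be compiled, since every case is just a
-- constructor that the code generator can pattern match on.
\end{code}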

Cheers,

Josef

[1]https://github.com/Feldspar/feldspar-language
[2]http://www.cse.chalmers.se/~josefs/publications/paper21_cameraready.pdf

On Tue, Mar 12, 2013 at 9:06 PM, Jeremy Shaw jer...@n-heptane.com wrote:

 It would be pretty damn cool if you could create a data type for
 generically describing a monadic parser, and then use template haskell
 to generate a concrete parser from that data type. That would allow
 you to create your specification in a generic way and then target
 different parsers like parsec, attoparsec, etc. There is some code
 coming up in a few paragraphs that should describe this idea more
 clearly.

 I would like to suggest that while it would be cool, it is
 impossible. As proof, I will attempt to create a simple monadic parser
 that has only one combinator:

 anyChar :: ParserSpec Char

 We need the GADTs extension and some imports:

  {-# LANGUAGE GADTs, TemplateHaskell #-}
  import Control.Monad  (join)
  import qualified Text.Parsec.Char as P
  import Language.Haskell.TH(ExpQ, appE)
  import Language.Haskell.TH.Syntax (Lift(lift))
  import Text.Parsec(parseTest)
  import qualified Text.Parsec.Char as P
  import Text.Parsec.String (Parser)

 Next we define a type that has a constructor for each of the different
 combinators we want to support, plus constructors for the functor and
 monad methods:

  data ParserSpec a where
      AnyChar :: ParserSpec Char
      Return  :: a -> ParserSpec a
      Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
      FMap    :: (a -> b) -> ParserSpec a -> ParserSpec b
 
  instance Lift (ParserSpec a) where
      lift _ = error "not defined because we are screwed later anyway."

 In theory, we would extend that type with things like `Many`, `Some`,
 `Choice`, etc.

 In Haskell, we are used to seeing a `Monad` defined in terms of
 `return` and `>>=`. But, we can also define a monad in terms of
 `fmap`, `return` and `join`. We will do that in `ParserSpec`, because
 it makes the fatal flaw more obvious.

 Now we can define the `Functor` and `Monad` instances:

  instance Functor ParserSpec where
      fmap f p = FMap f p

  instance Monad ParserSpec where
      return a = Return a
      m >>= f  = Join ((FMap f) m)

 and the `anyChar` combinator:

  anyChar :: ParserSpec Char
  anyChar = AnyChar

 And now we can define a simple parser that parses two characters and
 returns them:

  charPair :: ParserSpec (Char, Char)
  charPair =
      do a <- anyChar
         b <- anyChar
         return (a, b)

 Now, we just need to define a template haskell function that generates
 a `Parser` from a `ParserSpec`:

  genParsec :: (Lift a) => ParserSpec a -> ExpQ
  genParsec AnyChar    = [| anyChar |]
  genParsec (Return a) = [| return a |]
  genParsec (Join p)   = genParsec p
  -- genParsec (FMap f p) = appE [| f |] (genParsec p) -- uh-oh

 Looking at the `FMap` case we see the fatal flaw. In order to
 generate the parser we would need some way to transform any arbitrary
 Haskell function of type `a -> b` into Template Haskell. Obviously,
 that is impossible (for some definition of obvious).

 Therefore, we can assume that it is not possible to use Template
 Haskell to generate a monadic parser from a monadic specification.

 We can also assume 

[Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
Hi,

I'm putting together a cabal package and I'd like to have some code
examples in my description file. In particular I would like to have a code
block containing markdown containing a code block of Haskell, like this:

 ~~~{ .haskell }
 module Main where

 main = putStrLn "Hello World!"
 ~~~

When I put the above code in my .cabal file and do `cabal haddock
--executables` I get the following error:

haddock: failed to parse haddock prologue from file:
dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt

In general I can provoke the parse error from haddock whenever I have
something inside curly braces.

So the error seems to stem from haddock. I've tried to track down what
happens inside haddock but I've run out of steam. I'd like to know if there is
anything I can do to be able to write something like the above as part of
the description in my .cabal file.

Thanks,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
On Mon, Feb 4, 2013 at 1:26 PM, Mateusz Kowalczyk
fuuze...@fuuzetsu.co.uk wrote:

 I don't understand why you're putting it in your .cabal file. Isn't
 something like 3.8.5 over at [1] what you're trying to achieve?

 Right, I probably should have mentioned that the reason I put it in the
.cabal file is that the cabal package I'm putting together is an
executable. So I want the documentation to show up on the index page.


 ...

 I had a look at a package ([2]) that I know uses a multi-line code block
 example. Here's what I found in its cabal file:
 An example:
 .
  runShpider $ do
    download "http://apage.com"
    theForm : _ <- getFormsByAction "http://anotherpage.com"
    sendForm $ fillOutForm theForm $ pairs $ do
      "occupation" =: "unemployed Haskell programmer"
      "location" =: "mother's house"
 .

 Depending on your mail client, the `>' signs might get quoted.

 Yes. I have no problems including code in general. It's when I have things
inside curly braces that I get a parse error. And that's exactly what I
would like to have.

Thanks,

Josef



 [1] - http://www.haskell.org/haddock/doc/html/ch03s08.html
 [2] -

 http://hackage.haskell.org/packages/archive/shpider/0.2.1.1/doc/html/Network-Shpider.html

 On 04/02/13 12:30, Josef Svenningsson wrote:
  Hi,
 
  I'm putting together a cabal package and I'd like to have some code
  examples in my description file. In particular I would like to have a
  code block containing markdown containing a code block of Haskell, like
  this:
 
  ~~~{ .haskell }
  module Main where
 
   main = putStrLn "Hello World!"
  ~~~
 
  When I put the above code in my .cabal file and do `cabal haddock
  --executables` I get the following error:
 
  haddock: failed to parse haddock prologue from file:
  dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt
 
  In general I can provoke the parse error from haddock whenever I have
  something inside curly braces.
 
  So the error seems to stem from haddock. I've tried to track down what
  happens inside haddock but I've run out of steam. I'd like to know if there
  is anything I can do to be able to write something like the above as
  part of the description in my .cabal file.
 
  Thanks,
 
  Josef
 
 
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
Hi Ryan,

As far as I can tell I'm following the Haddock formatting just fine. I'm
using bird tracks for my code block and according to the Haddock manual
those code blocks are interpreted literally without any additional markup.
 To me that suggests that I should be able to write just about anything in
these code blocks. But that's evidently not so.

The documentation for the lens package is indeed impressive. But it doesn't
help me with my particular conundrum of the curly braces.

Thanks,

Josef

On Mon, Feb 4, 2013 at 2:01 PM, Ryan Yates fryguy...@gmail.com wrote:

 Hi Josef,

 You should be fine if you follow Haddock formatting.  For example:

 http://hackage.haskell.org/package/lens

 Is from the cabal file:

 http://hackage.haskell.org/packages/archive/lens/3.8.5/lens.cabal


 Ryan


 On Mon, Feb 4, 2013 at 7:30 AM, Josef Svenningsson 
 josef.svennings...@gmail.com wrote:

 Hi,

 I'm putting together a cabal package and I'd like to have some code
 examples in my description file. In particular I would like to have a code
 block containing markdown containing a code block of Haskell, like this:

  ~~~{ .haskell }
  module Main where
 
   main = putStrLn "Hello World!"
  ~~~

 When I put the above code in my .cabal file and do `cabal haddock
 --executables` I get the following error:

 haddock: failed to parse haddock prologue from file:
 dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt

 In general I can provoke the parse error from haddock whenever I have
 something inside curly braces.

 So the error seems to stem from haddock. I've tried to track down what
 happens inside haddock but I've run out of steam. I'd like to know if there is
 anything I can do to be able to write something like the above as part of
 the description in my .cabal file.

 Thanks,

 Josef

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Problems with code blocks in the description field in a .cabal file

2013-02-04 Thread Josef Svenningsson
On Mon, Feb 4, 2013 at 5:10 PM, Ryan Yates fryguy...@gmail.com wrote:

 Hi Josef,

 Sorry, I misunderstood the intent.  From what I can tell this is a
 Cabal deficiency.  The text from the description field first passes through
 the `tokeniseLine` function from here:
 https://github.com/haskell/cabal/blob/cabal-1.16/Cabal/Distribution/ParseUtils.hs#L428
  which seems to have no provision for escaping.

 Indeed. Bummer. And I notice now that there's already a bug filed in the
tracker about this:
https://github.com/haskell/cabal/issues/885

I guess I'll have to do without the example in my documentation for now.

Thank you very much!

Josef


 Ryan


 On Mon, Feb 4, 2013 at 8:37 AM, Josef Svenningsson 
 josef.svennings...@gmail.com wrote:

 Hi Ryan,

 As far as I can tell I'm following the Haddock formatting just fine. I'm
 using bird tracks for my code block and according to the Haddock manual
 those code blocks are interpreted literally without any additional markup.
  To me that suggests that I should be able to write just about anything in
 these code blocks. But that's evidently not so.

 The documentation for the lens package is indeed impressive. But it
 doesn't help me with my particular conundrum of the curly braces.

 Thanks,

 Josef

 On Mon, Feb 4, 2013 at 2:01 PM, Ryan Yates fryguy...@gmail.com wrote:

 Hi Josef,

 You should be fine if you follow Haddock formatting.  For example:

 http://hackage.haskell.org/package/lens

 Is from the cabal file:

 http://hackage.haskell.org/packages/archive/lens/3.8.5/lens.cabal


 Ryan


 On Mon, Feb 4, 2013 at 7:30 AM, Josef Svenningsson 
 josef.svennings...@gmail.com wrote:

 Hi,

 I'm putting together a cabal package and I'd like to have some code
 examples in my description file. In particular I would like to have a code
 block containing markdown containing a code block of Haskell, like this:

  ~~~{ .haskell }
  module Main where
 
   main = putStrLn "Hello World!"
  ~~~

 When I put the above code in my .cabal file and do `cabal haddock
 --executables` I get the following error:

 haddock: failed to parse haddock prologue from file:
 dist/doc/html/codeExtract/codeExtract/haddock-prolog31969.txt

 In general I can provoke the parse error from haddock whenever I have
 something inside curly braces.

 So the error seems to stem from haddock. I've tried to track down what
 happens inside haddock but I've run out of steam. I'd like to know if there is
 anything I can do to be able to write something like the above as part of
 the description in my .cabal file.

 Thanks,

 Josef

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] smt solver bindings

2011-12-15 Thread Josef Svenningsson
On Thu, Dec 15, 2011 at 7:04 PM, Dimitrios Vytiniotis 
dimit...@microsoft.com wrote:


 I've a quick question:

 Are there Haskell wrappers for the Z3 C API around?

 I believe sbv recently got support for Z3 but I don't know if it uses the
C API. Neither have I tried the Z3 backend, I only played with the Yices
backend. If you contact Levent Erkök, the author of sbv, he should be able
to give you more information.

 https://github.com/LeventErkok/sbv

Thanks,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Mysterious complaint about .hi files

2011-06-07 Thread Josef Svenningsson
Hi cafe!

I'm hitting a very strange problem when using haskell-src-exts and
haskell-src-exts-qq. Consider the following module:

\begin{code}
{-# Language QuasiQuotes #-}
module TestBug where

import Language.Haskell.Exts
import Language.Haskell.Exts.QQ

unit = TyTuple Boxed []

ty = [dec| quux :: (a,b) |]
\end{code}

This module doesn't load for me using ghc 7.0.3. I've pasted the full error
message at the end of this email but the error message begins with the
following lines:

TestBug.hs:11:11:
Can't find interface-file declaration for variable
Language.Haskell.Exts.Syntax.Boxed
  Probable cause: bug in .hi-boot file, or inconsistent .hi file
  Use -ddump-if-trace to get an idea of which file caused the error

Using -ddump-if-trace didn't help me much.

The funny thing is that if I comment out the last line (the definition of
'ty') then the module loads just fine even though it uses the Boxed type in
the definition of 'unit'. So the problem only manifests itself when I use
tuples from haskell-src-exts-qq. Everything else that I've used from
haskell-src-exts-qq works fine, it's just when I try to use tuples that
things go haywire.

I've tried to remove the packages and reinstall them but it didn't help.

Any clues?

Josef

TestBug.hs:11:11:
Can't find interface-file declaration for variable
Language.Haskell.Exts.Syntax.Boxed
  Probable cause: bug in .hi-boot file, or inconsistent .hi file
  Use -ddump-if-trace to get an idea of which file caused the error
In the first argument of `Language.Haskell.Exts.Syntax.TyTuple', namely
  `Language.Haskell.Exts.Syntax.Boxed'
In the third argument of `Language.Haskell.Exts.Syntax.TypeSig', namely
  `Language.Haskell.Exts.Syntax.TyTuple
 Language.Haskell.Exts.Syntax.Boxed
 ((:)
(Language.Haskell.Exts.Syntax.TyVar
   (Language.Haskell.Exts.Syntax.Ident ((:) 'a' [])))
((:)
   (Language.Haskell.Exts.Syntax.TyVar
  (Language.Haskell.Exts.Syntax.Ident ((:) 'b' [])))
   []))'
In the expression:
  Language.Haskell.Exts.Syntax.TypeSig
(SrcLoc
   ((:)
  '<'
  ((:)
 'u'
 ((:)
'n'
((:)
   'k'
   ((:)
  'n'
  ((:)
 'o'
 ((:)
'w' ((:) 'n' ((:) '>' ((:) '.' ((:) 'h' ((:)
's' []
   1
   2)
((:)
   (Language.Haskell.Exts.Syntax.Ident
  ((:) 'q' ((:) 'u' ((:) 'u' ((:) 'x' [])
   [])
(Language.Haskell.Exts.Syntax.TyTuple
   Language.Haskell.Exts.Syntax.Boxed
   ((:)
  (Language.Haskell.Exts.Syntax.TyVar
 (Language.Haskell.Exts.Syntax.Ident ((:) 'a' [])))
  ((:)
 (Language.Haskell.Exts.Syntax.TyVar
(Language.Haskell.Exts.Syntax.Ident ((:) 'b' [])))
 [])))
Failed, modules loaded: none.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Wanted: composoable parsers from haskell-src-exts

2011-03-18 Thread Josef Svenningsson
On Thu, Mar 17, 2011 at 10:58 PM, Niklas Broberg
niklas.brob...@gmail.com wrote:

I already export a partial parser for top-of-file pragmas,
 
  I see. What I don't see is how such a parser would return the rest of
 input.

 Hmm. I see. And I see that you are correct in not seeing it, since it
 appears it cannot be done with Happy, which I believed. It would then be
 down to the parser/lexer to pass on unconsumed input, which seems a daunting
 and disruptive task, seeing how the lexer typically would tokenize some
 input in advance before the parser discovers that it cannot consume what has
 been lexed... Hmm.


I think this is actually doable, although not necessarily easy, using a
technique due to Oleg. He has used delimited continuations to take ordinary
parsers and make them incremental. Dan Doel has experimented with applying
the technique to Parsec parsers with some success. I think choosing the
right parser monad in Happy can make this work.

Reference to Oleg's technique:
http://okmij.org/ftp/continuations/differentiating-parsers.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Safer common subexpression elimination

2010-11-26 Thread Josef Svenningsson
On Thu, Nov 25, 2010 at 11:32 AM, Joachim Breitner m...@joachim-breitner.de
 wrote:

 Hi,

 although semantically it could, ghc does not do common subexpression
 elimination (CSE), at least not in every case. The canonical example
 where it would be bad is the function (with div to avoid numerical
 casting):

  avg :: [Int] -> Int
  avg n = (sum [0..n] `div` length [0..n])

 The reason why [0..n] is not shared is because then it would be kept in
 memory completely after the evaluation of sum, because length will still
 need it, possibly overflowing the memory. In the non-shared form,
 though, the garbage collector can clean up directly behind sum and
 length, and the program runs in constant space.

 Note that the shared expression would be very large compared to the
 thunks.

 Now consider the program:

  avg' :: [Int] -> (Int, Int)
  avg' n = (sum [0..n] `div` length [0..n], length [0..n])

 I’d say that CSE to
  avg n = (sum [0..n] `div` l, l)
where l = length [0..n]
 is safe, because the shared expression (an Int) is smaller than the
 thunks and the compiler can tell.

 So I wonder:
  * Is sharing values of type Int (and Bool and similar small values)
 always safe?
  * If so: does GHC already do that?
  * Would it be technically possible?
  * Is there an established theory that can tell, for a sharing
 candidate, whether it is safe to share it?

 I'm just going to answer your last question.

Jörgen Gustavsson and David Sands developed the theory of space improvement
for call-by-need. That can help you answer the other questions you have. But
that being said, it's fairly complicated stuff, and it's not a very easy to
use theory. I think it's inherent in the problem though, the
space behavior of lazy programs is just weird. If you're curious to read
about space improvement see papers [GS01] and [GS99] on the following page:
http://www.cse.chalmers.se/~dave/davewww.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Wadler space leak

2010-11-09 Thread Josef Svenningsson
Let me clarify a bit exactly how Gustavsson and Sands (I'll refer to them as
GS) handled the issue of the Wadler space leak. It's true that they adopted
an approach similar to Sparud in that they extended their core calculus with
a new language construct which could solve the problem. This is contrast to
Wadler who changed the garbage collector instead, something that GS said
would lead to bad behavior in their calculus.
BUT, GS did not adopt exactly the construct that Sparud suggested. Sparud's
suggestion was to add an updatePat primitive to the language. This was
inspired by how the G-machine work, it had update instructions which where
typically executed after a value was computed. It's a rather bad fit for the
STG-machine which pushes update markers on the stack whenever it starts to
evaluate a thunk. Updates are performed whenever there is an update marker
on the stack when it has computed something to WHNF.
The language construct that GS chose was to have pattern bindings as
primitive in the language. So the code snippet below (taken from Jörgen's
thesis) would be a valid core program. It would not be desugared into case
expressions.
~~~
let (ps,qs) = split ys
in (y:ps,qs)
~~~
The semantics of pattern bindings involves a new kind of update marker
which, in the example above, will update both ps and qs, whenever the 'split
ys' is computed to WHNF. This neatly solves the space leak problem. And it
is a much closer fit to the STG-machine in that uses update markers on the
stack to coordinate the graph reduction.

I think the solution GS chose should work much better for GHC than Sparud's
suggestion. But it would no doubt be an invasive change to GHC as Core would
have to be changed to support pattern bindings.
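
For comparison, here is a rough sketch (not actual GHC Core, just the shape of
the usual desugaring with selector thunks) of where the leak comes from:

~~~
let t  = split ys      -- one shared thunk for the pair
    ps = fst t         -- selector thunk
    qs = snd t         -- selector thunk
in (y:ps, qs)
~~~

Evaluating ps through its selector keeps t, and hence everything qs will ever
need, alive until qs is also demanded, unless the runtime treats selector
thunks specially (which is what Wadler's garbage collector fix does). The
pattern-binding primitive instead updates ps and qs together as soon as t
reaches WHNF.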

Cheers,

Josef

On Tue, Nov 9, 2010 at 8:58 AM, Duncan Coutts
duncan.cou...@googlemail.com wrote:

 On 8 November 2010 13:28, Simon Marlow marlo...@gmail.com wrote:
 
  There's another approach in Jan Sparud's paper here:
 
  http://portal.acm.org/citation.cfm?id=165196
 
  although it's not clear that this interacts very well with inlining
 either,
  and it has a suspicious-looking side-effecting operation.  It also looks
  like it creates a circular reference between the thunk and the selectors,
  which might hinder optimisations, and would probably also make things
 slower
  (by adding extra free variables to the thunk).

 This proposal is mentioned favourably by Jörgen Gustavsson and David Sands
 in [1] (see section 6, case study 6). They mention that there is a
 formalisation in Gustavsson's thesis [2]. That may say something about
 inlining, since that's just the kind of transformation they'd want to
 show is a space improvement.

 [1]: Possibilities and Limitations of Call-by-Need Space Improvement (2001)
  http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.8.4097

 [2]: Space-Safe Transformations and Usage Analysis for Call-by-Need
 Languages (2001)
  (which I cannot immediately find online)

 Duncan
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Compiling a DSL on the shoulders of GHC

2010-10-18 Thread Josef Svenningsson
Fiddling with GHC internals sounds like overkill for this project.

Are you really sure you need a timeout to run the Haskell metaprogram? There
are many implementations of EDSLs which take the approach that you want to
take by using Haskell to create a syntax tree and the offshore it to some
backend compiler. None of them uses a timeout.

But in case you really insist on a timeout I would recommend using a wrapper
function on the toplevel of your metaprogram which implements the timeout.
That way you don't have to risk your sanity by having to dig around in GHC.
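
Concretely, something along these lines should do. This is just an untested
sketch using System.Timeout; runMetaprogram stands in for whatever your
metaprogram's entry point happens to be.

import System.Timeout (timeout)

-- Wrap the top level of the metaprogram; 'timeout' takes microseconds and
-- returns Nothing if the computation did not finish in time.
runWithTimeout :: Int -> IO a -> IO (Maybe a)
runWithTimeout micros metaprogram = timeout micros metaprogram

main :: IO ()
main = do
  result <- runWithTimeout (5 * 1000 * 1000) runMetaprogram  -- 5 seconds
  case result of
    Just ast -> print ast            -- hand the generated program to the backend
    Nothing  -> putStrLn "metaprogram timed out"
  where
    runMetaprogram :: IO Int         -- placeholder for the real metaprogram
    runMetaprogram = return 42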

Josef

On Sun, Oct 17, 2010 at 9:53 PM, Patai Gergely patai_gerg...@fastmail.fm wrote:

  Not sure how this fits into what I thought you were saying.  Are you
  trying to use Haskell to build an AST, use GHC to optimize it, and
  then spit it out and compile it with, say, a OCaml program that you
  have already written?
 Yes, that would be the basic idea:

 1. Compile the Haskell metaprogram.
 2. Evaluate main, possibly with a timeout, in a way that keeps all its
 structure including lambdas accessible (e.g. Core).
 3. Compile the resulting program with other tools.

  What is this different tool and how does it fit in to your pipeline?
 This tool(set) is a specialised compiler for some low-level target
 platform (FPGA, DSP, GPU - again, no clear decision yet), and it is the
 second half of the pipeline after the GHC phases.

 Gergely

 --
 http://www.fastmail.fm - The professional email service

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Higher-order algorithms

2010-08-24 Thread Josef Svenningsson
On Mon, Aug 23, 2010 at 6:10 PM, Max Rabkin max.rab...@gmail.com wrote:

 (Accidentally sent off-list, resending)

 On Mon, Aug 23, 2010 at 15:03, Eugene Kirpichov ekirpic...@gmail.com
 wrote:
  * Difference lists

  I mean that not only higher-order facilities are used, but the essence
  of the algorithm is some non-trivial higher-order manipulation.

 If I'm not mistaken, we can defunctionalize difference lists like this:

 data DList a = Chunk [a] | Concat (DList a) (DList a)

 fromList = Chunk
 (<>) = Concat
 singleton = Chunk . (:[])
 empty = Chunk []

 toList dl = dl `fold` []
   where
     infixr `fold`
     fold :: DList a -> [a] -> [a]
     fold (Concat l r) ys = l `fold` r `fold` ys
     fold (Chunk xs) ys = xs ++ ys

 (This implementation has only been lightly tested)

 And of course, we knew this was possible, since we can compile DLists
 to first-order machines.

 I agree that the functional, higher-order presentation is clear and
 elegant. But is it essential?

 It's true that any higher-order program can be defunctionalized (or
closure-converted) to a first order program. But defunctionalization is a
whole program transformation and in general we might lose compositionality
when applying it to a library. In your case above with difference lists
there is no change in the interface since it is first order. But if you try
to defunctionalize a monad then you will have to defunctionalize the second
argument to the bind function and all of a sudden you cannot use the bind
function as freely as before.
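
To make that concrete, here is a small made-up sketch (untested, all names
invented): defunctionalizing a tiny teletype monad forces the second argument
of bind into a closed, first-order type of continuations.

{-# LANGUAGE GADTs #-}

-- First-order syntax for a tiny teletype monad.
data Prog a where
  GetChar :: Prog Char
  PutChar :: Char -> Prog ()
  Return  :: a -> Prog a
  Bind    :: Prog a -> Kont a b -> Prog b

-- The defunctionalized continuations: only these may follow a Bind.
data Kont a b where
  EchoK :: Kont Char ()        -- stands for \c -> PutChar c
  ThenK :: Prog b -> Kont a b  -- stands for \_ -> p

apply :: Kont a b -> a -> Prog b
apply EchoK     c = PutChar c
apply (ThenK p) _ = p

-- With the original monad a client can write 'getChar >>= \c -> anything';
-- here only continuations that have a code in 'Kont' are expressible, which
-- is the loss of compositionality mentioned above.
echo :: Prog ()
echo = Bind GetChar EchoK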


 Also, I'm curious about how this performs relative to the functional
 version.

 In my small experiments with defunctionalization there is not much
difference between a higher order program and its defunctionalized version.
I used GHC in those experiments.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] feasability of implementing an awk interpreter.

2010-08-20 Thread Josef Svenningsson
On Fri, Aug 20, 2010 at 6:05 AM, Jason Dagit da...@codersbase.com wrote:



 On Thu, Aug 19, 2010 at 8:05 PM, Michael Litchard mich...@schmong.org wrote:

 I'd like the community to give me feedback on the difficulty level of
 implementing an awk interpreter. What language features would be
 required? Specifically I'm hoping that TH is not necessary because I'm
 nowhere near that skill level.


 Implementing an awk interpreter in Haskell can be a fun project. I have a
half finished implementation lying around on the hard drive. It's perfectly
possible to implement it without using any super fancy language features.
But as other people have pointed out, monads are helpful for dealing with a
lot of the plumbing in the interpreter.

An outline of a possible approach would be appreciated. I am using
 http://www.math.utah.edu/docs/info/gawk_toc.html
 as a guide to the language description.


 You might also focus on the 'core' of awk.  Think about, what is the
 minimal language and start from there.  Grow your implementation adding
 features bit by bit.  It's also a good opportunity to do testing.  You have
 a reference implementation and so you can write lots of tests for each
 feature as you add them.

 When I wrote my awk interpreter I decided to go for the whole language from
the start. I had reasons for doing this, as there were certain aspects I
wanted to capture, but it is not the way I would recommend going about it. I
definitely second Jason's advice to try to capture the core functionality
first.
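
To make the advice a bit more concrete, here is a rough, untested sketch of
what a "core awk" AST and interpreter monad could look like. All the names are
invented and it handles almost nothing, but it shows where the monadic
plumbing goes.

import qualified Data.Map as Map
import Control.Monad.State

data Value = Num Double | Str String
  deriving Show

data Expr
  = Lit Value
  | Var String
  | Field Int                -- $1, $2, ... (no $0 or bounds checks here)
  deriving Show

data Stmt
  = Print [Expr]
  | Assign String Expr
  deriving Show

-- The interpreter state: global variables plus the fields of the current
-- input record.
data AwkState = AwkState
  { variables :: Map.Map String Value
  , fields    :: [String]
  }

-- The monad stack hides the plumbing: state for variables/fields, IO for output.
type Awk a = StateT AwkState IO a

evalExpr :: Expr -> Awk Value
evalExpr (Lit v)   = return v
evalExpr (Var x)   = gets (Map.findWithDefault (Str "") x . variables)
evalExpr (Field i) = gets (Str . (!! (i - 1)) . fields)

execStmt :: Stmt -> Awk ()
execStmt (Print es)   = mapM_ (\e -> evalExpr e >>= lift . print) es
execStmt (Assign x e) = do
  v <- evalExpr e
  modify (\s -> s { variables = Map.insert x v (variables s) })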

Have fun,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] derivable type classes

2010-03-23 Thread Josef Svenningsson
On Tue, Mar 23, 2010 at 11:52 AM, Ozgur Akgun ozgurak...@gmail.com wrote:
 Can a user define a derivable type class of her own? If yes, how?

GHC has a feature which lets you define classes such that making an
instance of them is as easy as deriving. It's called Generic classes.
See GHC's documentation for the details:
http://www.haskell.org/ghc/docs/latest/html/users_guide/generic-classes.html

Hth,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: I accidentally the Prelude

2010-03-02 Thread Josef Svenningsson
On Mon, Mar 1, 2010 at 11:54 PM, Jeremy Shaw jer...@n-heptane.com wrote:
 is there, by chance, a file named Prelude.hs in the working directory? (the
 directory you are in when you type ghci?)
 - jeremy

Ah. Thanks! That was indeed the problem.

Though I think ghci's response could be a little more transparent.

Josef

 On Mon, Mar 1, 2010 at 11:43 AM, Josef Svenningsson
 josef.svennings...@gmail.com wrote:

 Hi,

 It seems I've been able to mess up my ghc installation pretty badly.
 Here is what happens if I just try to invoke ghci from the prompt:

 $ ghci
 GHCi, version 6.10.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer ... linking ... done.
 Loading package base ... linking ... done.
 command line: module `Prelude' is not loaded
 $

 I have no idea what I did to end up in this situation. What I've been
 doing lately is reinstalling some packages. I also have another ghc
 installed but it's at a completely different place in the file system.
 The only thing I can think of is if cabal managed to somehow confuse
 the two ghcs and wrote some data in the wrong place.

 What I really would like to know is if there is a simple way to fix
 this without completely reinstalling ghc with all the libraries I have
 installed. Has anyone else experienced anything similar?

 If this is a potential bug I'd be happy to provide any data that might
 help track it down.

 Cheers,

 Josef
 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: I accidentally the Prelude

2010-03-02 Thread Josef Svenningsson
On Tue, Mar 2, 2010 at 12:21 PM, Simon Marlow marlo...@gmail.com wrote:
 On 02/03/2010 08:59, Josef Svenningsson wrote:

 On Mon, Mar 1, 2010 at 11:54 PM, Jeremy Shaw jer...@n-heptane.com wrote:

 is there, by chance, a file named Prelude.hs in the working directory?
 (the
 directory you are in when you type ghci?)
 - jeremy

 Ah. Thanks! That was indeed the problem.

 Though I think ghci:s response could be a little bit more transparent.

 Sure, how about this:

 $ touch Prelude.hs
 $ ghci
 GHCi, version 6.12.1: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Loading package ffi-1.0 ... linking ... done.
 Prelude

 ie. with 6.12.1 it just works.

Brilliant! Thanks.

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


I accidentally the Prelude

2010-03-01 Thread Josef Svenningsson
Hi,

It seems I've been able to mess up my ghc installation pretty badly.
Here is what happens if I just try to invoke ghci from the prompt:

$ ghci
GHCi, version 6.10.4: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
command line: module `Prelude' is not loaded
$

I have no idea what I did to end up in this situation. What I've been
doing lately is reinstalling some packages. I also have another ghc
installed but it's at a completely different place in the file system.
The only thing I can think of is if cabal managed to somehow confuse
the two ghcs and wrote some data in the wrong place.

What I really would like to know is if there is a simple way to fix
this without completely reinstalling ghc with all the libraries I have
installed. Has anyone else experienced anything similar?

If this is a potential bug I'd be happy to provide any data that might
help track it down.

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] <**> for nested applicative functors?

2009-10-12 Thread Josef Svenningsson
On Mon, Oct 12, 2009 at 6:22 PM, Kim-Ee Yeoh a.biurvo...@asuhan.com wrote:

 Does anyone know if it's possible to write the following:

  <**> :: (Applicative m, Applicative n) =>
  m (n (a -> b)) -> m (n a) -> m (n b)

 Clearly, if m and n were monads, it would be trivial.

 Rereading the original paper, I didn't see much discussion
 about such nested app. functors.

 Any help appreciated.

How about

m <**> n = pure (<*>) <*> m <*> n
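
Spelled out with a type signature it looks like this; I've used a fresh name
in the sketch since Control.Applicative already exports a (<**>) with a
different meaning.

import Control.Applicative

(<<*>>) :: (Applicative m, Applicative n)
        => m (n (a -> b)) -> m (n a) -> m (n b)
m <<*>> n = pure (<*>) <*> m <*> n

-- e.g. [Just (+1), Nothing] <<*>> [Just 10, Just 20]
--        == [Just 11, Just 21, Nothing, Nothing]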

Hth,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] types and braces

2009-04-17 Thread Josef Svenningsson
Conor,

I'd like to point out a few things that may help you on the way.

On Wed, Apr 15, 2009 at 8:58 PM, Conor McBride
co...@strictlypositive.org wrote:
 I don't immediately see what the clash in that context would be - I
 *think* what you propose should be doable. I'd be interested to know
 what you come up with, or I might have a look at it myself when I find
 a few minutes to spare.

 I've found that I can add a production

 atype :: { Type }
  ...
  | '{' trueexp '}'

 if I remove the productions for record declarations

 constr1 :: { ConDecl }
      | con '{' '}'                   { RecDecl $1 [] }
      | con '{' fielddecls '}'        { RecDecl $1 (reverse $3) }

 which suggests that it is indeed the syntax

  data Moo = Foo {goo :: Boo Hoo}

 which is in apparent conflict with my proposed extension.
 The current parser uses the type parser btype to parse the
 initial segment of constructor declarations, so my change
 causes trouble.

 Further trouble is in store from infix constructors

  data Noo = Foo {True} :*: Foo {False}

 should make sense, but you have to look quite far to
 distinguish that from a record.

 So I don't see that my proposed extension introduces a
 genuine ambiguity, but it does make the parser a bit
 knottier.

Remember that even though your parser is unambiguous, that doesn't mean
that happy will be able to handle it. Happy deals with LALR grammars
and you have to confine yourself to that restriction in order to make
happy happy. Also, your example above suggests that your grammar might
require an infinite lookahead, something which happy doesn't deal
with.

Having said all this, there is a magic flag which you can give to
happy which will make all these headaches go away. The incantation is
--glr which makes happy produce generalized lr parsers which can deal
even with ambiguous grammars. I've never used this myself so I can't
give you any further advice than to point you in the general
direction. The happy manual is your friend.

Happy happy hacking.

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANNOUNCE: pqueue-mtl, stateful-mtl

2009-02-16 Thread Josef Svenningsson
On Mon, Feb 16, 2009 at 2:30 AM, wren ng thornton w...@freegeek.org wrote:
 Louis Wasserman wrote:

 I follow.  The primary issue, I'm sort of wildly inferring, is that use of
 STT -- despite being pretty much a State monad on the inside -- allows
 access to things like mutable references?

 That's exactly the problem. The essential reason for ST's existence are
 STRefs which allow mutability.

I'd like to point out one other thing that ST provides, which is often
forgotten. It provides *polymorphic* references. That is, we can
create new references of any type.

So ST is a really magical beast. Not only does it provide mutation, it
also provides mutable references. And these are two different
features. Now, everyone agrees that mutation is not something that you
can implement in a functional language, so ST cannot be implemented in
Haskell for that reason. It has to be given as a primitive. But what
about polymorphic references? Can they be implemented in Haskell? The
Claessen conjecture (after Koen Claessen) is that they cannot be
implemented in Haskell. See the following email for more details:
http://www.haskell.org/pipermail/haskell/2001-September/007922.html

One could try and separate mutation and polymorphic references and
give them as two different primitives and implement ST on top of that.
But I haven't seen anyone actually trying that (or needing it for that
matter).
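
A small example of what I mean by polymorphic references, just to fix ideas:
inside one ST computation we can allocate references at whatever types we
like, which is exactly what a pure simulation over some fixed store type
struggles to offer. (Untested sketch.)

import Control.Monad.ST
import Data.STRef

example :: (Int, String)
example = runST go
  where
    go :: ST s (Int, String)
    go = do
      r1 <- newSTRef (0 :: Int)      -- a reference holding an Int
      r2 <- newSTRef "hello"         -- another one holding a String
      modifySTRef r1 (+1)
      writeSTRef r2 "world"
      a <- readSTRef r1
      b <- readSTRef r2
      return (a, b)                  -- (1, "world")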

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: ST monad and monad tranformers

2009-02-02 Thread Josef Svenningsson
On Mon, Feb 2, 2009 at 8:50 PM, Reid Barton rwbar...@math.harvard.edu wrote:
 On Mon, Feb 02, 2009 at 06:03:15PM +0100, Josef Svenningsson wrote:
 Hi Tyson,

 I also needed something like this a while ago so I knocked up a really
 simple module and put it on hackage:
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/STMonadTrans

 Warning!  The STMonadTrans package uses State# nonlinearly, and as a
 result, can violate referential transparency:

Indeed, thanks for pointing this out. I really should have a warning
sign on the package saying that it only works for certain monads.

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ST monad and monad tranformers

2009-02-02 Thread Josef Svenningsson
Hi Tyson,

I also needed something like this a while ago so I knocked up a really
simple module and put it on hackage:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/STMonadTrans

If you have any suggestions for improvement they are most welcome.
Patches even more so.

Josef

2009/2/2 Tyson Whitehead twhiteh...@gmail.com:
 I have a situation in which I believe I need a parameterizable version of the
 strict ST monad.  My computation type is StateT s' (STT s (ErrorT e m)) a
 (i.e., fails or succeeds and has an internal state involving a state thread).

 The STT type above is a version of ST like the ReaderT, StateT, etc. types.

 newtype STT s m a = STT ( State# s -> m (STTBox s a) )
 data STTBox s a = STTBox {-#UNPACK#-} !(State# s) {-#UNPACK#-} !a

 (I'm guessing on the UNPACK pragmas here) with

 runSTT :: (Monad m) => (forall s. STT s m a) -> m a
 runSTT m = case m of STT m' -> do STTBox _ x <- m' realWorld#
                                   return x

 (writing this as runSTT (STT m') = ... doesn't typecheck with ghc 6.8.2)

 instance Monad m => Monad (STT s m) where
    return x = STT $ \s -> return $ STTBox s x
    (STT m) >>= k = STT $ \s -> do STTBox s' x <- m s
                                   case k x of STT k' -> k' s'

 plus all the assorted instances for Functor, MonadPlus, MonadFix, MonadTrans,
 MonadReader, MonadState, etc.  For example,

 instance MonadWriter w m => MonadWriter w (STT s m) where
    tell = lift . tell
    listen (STT m) = STT $ \s -> do (STTBox s' x,w) <- listen $ m s
                                    return $ STTBox s' (x,w)
    pass   (STT m) = STT $ \s -> pass $ do STTBox s' (x,f) <- m s
                                           return (STTBox s' x,f)

 I was looking for any comments, wondering if there is a reason for this not
 existing in the library already, and what I should do in terms of pragmas and
 such for speed?  I see the GHC-ST file has a mix of INLINE and NOINLINE.

 http://www.haskell.org/ghc/dist/current/docs/libraries/base/src/GHC-ST.html

 In particular, return, >>=, >>, and runST are marked INLINE, but there is a
 regrettably delicate comment that goes with the runST method.  Also, what
 about the Functor, MonadPlus, MonadFix, MonadTrans, MonadReader, etc. methods?

 Thanks! -Tyson

 PS:  I would be happy to provide the whole works to be added to the library if
 it is something that should be there.

 ___
 Glasgow-haskell-users mailing list
 Glasgow-haskell-users@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Haskell project proposals reddit

2008-12-10 Thread Josef Svenningsson
On Wed, Dec 10, 2008 at 5:34 AM, Don Stewart [EMAIL PROTECTED] wrote:
 I'd like to echo Jason's remarks earlier.

http://www.reddit.com/r/haskell_proposals/

 We've tried for a couple of years now to efficiently track 'wanted
 libraries' for the community, but never with much success.

 In particular, two approaches have been tried:

* a wiki page
* the 200{6,7,8} summer of code trac

 neither proved workable long term, likely because no one knew about
 them, they're harder to contribute to and other factors.

I think this is a wonderful initiative, but I can't shake the feeling
that reddit is the wrong forum for this. Since reddit is primarily a
news site it penalises old submissions and eventually moves them out
of the front page. I can't see how that behavior is good for our
purposes here. A project proposal that has a thousand upvotes
shouldn't be moved from the list just because the proposal itself is
old.

If we want something that works in the long run we want something like
reddit but without the aging of old submissions. I don't know of any
such thing but there's bound to be one, this being the internet after
all.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Philip Wadler video on Howard-Curry Correspondence ???

2008-11-27 Thread Josef Svenningsson
2008/11/27 Galchin, Vasili [EMAIL PROTECTED]:
 Hello,

 I am reading re-reading Prof. Wadler paper

 Proofs are Programs: 19th Century Logic and 21st Century Computing

 but also want to re-read watch his video on same subject.

Is it this talk you're after?
http://video.google.com/videoplay?docid=-4167170843018186532ei=sI0uSZT7Faf22gKd9NTqDQq=wadler+philip

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers

2008-11-26 Thread Josef Svenningsson
On Wed, Nov 26, 2008 at 11:14 PM, David Menendez [EMAIL PROTECTED] wrote:
 How old is nhc? I've always thought of it as one of the big three,
 but I don't really know how far back it goes compared to ghc.

The following page suggests that it was released mid 1994 but there
could of course have been earlier releases.
http://www.cs.chalmers.se/pub/haskell/nhc/old/

Perhaps Malcolm Wallace knows more.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] a really juvenile question .. hehehehe ;^)

2008-10-06 Thread Josef Svenningsson
On Mon, Oct 6, 2008 at 2:58 PM, Cale Gibbard [EMAIL PROTECTED] wrote:
 2008/10/6 Don Stewart [EMAIL PROTECTED]:
 dagit:
data and newtype vary in one more subtle way, and that's how/when they
evaluate to bottom.  Most of the time they behave identically, but in the
right cases they act sightly differently.  newtype is usually regarded as
more efficient than data.  This is because the compiler can choose to
optimize away the newtype so that it only exists at type check time.  I
think this is also possible with data in some, but not all, uses.

 The compiler *must* optimise away the use. They're sort of 'virtual'
 data, guaranteed to have no runtime cost.

 I'm not sure that I'd want to be that emphatic about what an
 implementation *must* do regarding something so operational.

 [..]

 We can say however that newtypes have no additional runtime cost in
 GHC regardless of the optimisation level you pick.

Not even that is true in general. One can end up doing unnecessary work
just for the sake of converting types.

Suppose you have a newtype Price = Price Int and you're given [Int]
and want to have [Price]. This is simple to do, just 'map Price'. But
since Price and Int are represented the same way this ought to be just
the identity function. But it is in general very difficult for a
compiler to figure out that this traversal of the list in fact is just
the identity function. Simple type conversions like these can
unfortunately force you to do some work even though the representation
is identical.
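
A sketch of the situation (Price as above, nothing more than that):

newtype Price = Price Int

toPrices :: [Int] -> [Price]
toPrices = map Price
-- Representationally this is the identity, but as written it still walks the
-- whole list and rebuilds every cons cell; the compiler cannot in general see
-- that the traversal could be dropped entirely.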

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Red-Blue Stack

2008-09-26 Thread Josef Svenningsson
On Fri, Sep 26, 2008 at 7:18 PM, Stephan Friedrichs
[EMAIL PROTECTED] wrote:
 apfelmus wrote:
 [..]

 Persistent data structures are harder to come up with than ephemeral
 ones, [...]

 Yes, in some cases it's quite hard to find a persistent solution for a
 data structure that is rather trivial compared to its ephemeral
 counterpart. My question is: Is there a case, where finding a persistent
 solution that performs equally well is *impossible* rather than just
 harder? I mean might there be a case where (forced) persistence (as we
 have in pure Haskell) is a definite disadvantage in terms of big-O
 notation? Do some problems even move from P to NP in a persistent setting?

The only result I'm aware of is that of Nicholas Pippenger where he
shows that there are algorithms which are slower by a factor of log n
if one is not allowed to use mutation:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.670

Interestingly enough, this particular result does not carry over to
Haskell. The particular algorithm that he uses can actually be
implemented optimally using lazy evaluation, as show in the following
paper:
http://progtools.comlab.ox.ac.uk/members/oege/publications/jfp97

So a pure strict language is less efficient than a strict language
with mutation and a pure lazy language. Although intuitively a pure
lazy language should also be less efficient than a strict language
with mutation I'm not aware of any such results.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Call Graph Tool?

2008-06-27 Thread Josef Svenningsson
On Fri, Jun 27, 2008 at 12:39 PM, Claus Reinke [EMAIL PROTECTED] wrote:
 Assuming I get it included, is there any features in particular you'd
 want to
 see in there?  Note that if I do have it produce visualisations, they'll
 be
 static images as part of an analysis report rather than being
 interactive.

 I'd like the ability to show individual module dependencies, and then
 to collapse modules in one package to just the package, so I could
 zoom out and see how the packages relate to each other.  By
 package here I mean the A in A.B, A.C, etc.

 It seems like it would be fairly simple to use Language.Haskell.Parse
 to turn a set of modules into a graph, and then something to massage
 that and give it to dot.

 If you wanted to go down that route, try using 'ghc --make -v2'
 and translate that dependency graph to dot.

Yep, this is a pretty easy route and there is already a tool for doing
the translation: ocamldot. Don't be fooled by the name, it works on
Makefile's dependency information and in particular works well with
the output from ghc. Back in the days (like six years ago) I used it
on ghc itself. I also extended it so that different directories were
grouped together inside a box to get a better feel for the intended
structure of the program. ocamldot can be found here:
http://www.research.att.com/~trevor/ocamldot/

I should add that I didn't find the information the least bit helpful
so my general recommendation is to try to find some other method to
help understanding code.

All the best,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ghc on Ubuntu Linux?

2008-05-18 Thread Josef Svenningsson
On Sun, May 18, 2008 at 3:24 AM, Galchin, Vasili [EMAIL PROTECTED] wrote:
 Hi Josef,

  What generates dist/setup-config? When I run runhaskell Setup.hs
 configure, nothing including dist/setup.config gets generated. ??

Ok, that sounds more alarming. Unfortunately I have no idea what could
be the cause of this. Hopefully someone with a bit more cabal
knowledge can help out.

Good luck,

Josef


 On Sat, May 17, 2008 at 11:02 AM, Josef Svenningsson
 [EMAIL PROTECTED] wrote:

 On Sat, May 17, 2008 at 1:00 PM, Galchin, Vasili [EMAIL PROTECTED]
 wrote:
  Josef,
 
  E.g.
  [EMAIL PROTECTED]:~/FTP/Haskell/unix-2.2.0.0$ runhaskell Setup.hs
  configure
  Configuring unix-2.3.0.0...
 
  Normally copious output ... ???
 
 That's what it should look like! It just means you have a recent
 version of cabal which doesn't spit out tons of information when
 configuring. Happily all is well.

 /Josef


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: ghc on Ubuntu Linux?

2008-05-17 Thread Josef Svenningsson
Vasili,

I have pretty much exactly the same set up as you seem to have. I
haven't had a single problem with running configure using cabal. In
what sense does it stop working?

Cheers,

Josef

2008/5/17 Galchin, Vasili [EMAIL PROTECTED]:
 PS I have always installed ghc first via the Ubuntu package installer
 followed by a build from ghc 6.8.2 source!

 Vasili


 On Sat, May 17, 2008 at 12:01 AM, Galchin, Vasili [EMAIL PROTECTED]
 wrote:

 ghc-pkg list gives me 


 [EMAIL PROTECTED]:~$ ghc-pkg list
 /usr/local/lib/ghc-6.8.2/package.conf:
 Cabal-1.2.3.0, GLUT-2.1.1.1, HUnit-1.2.0.0, OpenGL-2.2.1.1,
 QuickCheck-1.1.0.0, array-0.1.0.0, base-3.0.1.0,
 bytestring-0.9.0.1, cgi-3001.1.5.1, containers-0.1.0.1,
 directory-1.0.0.0, fgl-5.4.1.1, filepath-1.1.0.0, (ghc-6.8.2),
 haskell-src-1.0.1.1, haskell98-1.0.1.0, hpc-0.5.0.0, html-1.0.1.1,
 mtl-1.1.0.0, network-2.1.0.0, old-locale-1.0.0.0, old-time-1.0.0.0,
 packedstring-0.1.0.0, parallel-1.0.0.0, parsec-2.1.0.0,
 pretty-1.0.0.0, process-1.0.0.0, random-1.0.0.0, readline-1.0.1.0,
 regex-base-0.72.0.1, regex-compat-0.71.0.1, regex-posix-0.72.0.2,
 rts-1.0, stm-2.1.1.0, template-haskell-2.2.0.0, time-1.1.2.0,
 unix-2.3.0.0, xhtml-3000.0.2.1


 I am using version ghc-6.8.2.


 V

 On Fri, May 16, 2008 at 11:51 PM, Galchin, Vasili [EMAIL PROTECTED]
 wrote:

 Hello,

  I am running ghc and tools on Ubuntu. I seem to be running into very
 strange problems between ghc-pkg and the Ubuntu package manager. Suddenly
 runhaskell Setup.hs configure just stops working ... Are any other
 ghc/Ubuntu users running into problems? If so, please tell me what problems.
 This is killing my Haskell work!

 Regards, Vasili



 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: strictness of interpreted haskell implementations

2008-04-25 Thread Josef Svenningsson
On Fri, Apr 25, 2008 at 9:17 PM, Duncan Coutts
[EMAIL PROTECTED] wrote:

  On Fri, 2008-04-25 at 09:08 -0700, Don Stewart wrote:
   Geraint.Jones:
Are there well-known differences in the implementations of Haskell in
ghci and hugs?  I've got some moderately intricate code (simulations
of pipelined processors) that behave differently - apparently because
ghci Haskell is stricter than hugs Haskell, and I cannot find any
obviously relevant claims about strictness in the documentation.

  I think they should give the same answer. It sounds like a bug in one
  implementation or the other.

I suspect this might be a library thing. If ghc and hugs use
different versions of the library and some function had its strictness
properties changed, that might account for the discrepancy.

   Hugs does no optimisations, while GHC does a truckload, including
   strictness analysis. Some of these optimisations prevent space leaks.

  Though none should change the static semantics.

That was my initial reaction as well, but then I recalled that some of
ghc's optimizations actually change the strictness behavior. The
foldr/build transformation, for instance, can change the strictness of
a function in a way you can actually observe. So we can't rule out
that ghc is doing something it shouldn't be doing.

  Post the code. Even if you don't have time to track down the difference,
  someone might.

Yep, without the code we're just fumbling in the dark.

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] STM example code

2008-03-17 Thread Josef Svenningsson
2008/3/9 Galchin Vasili [EMAIL PROTECTED]:
   I am playing around with the STM API. I would like to see examples of
 STM other than the Santa.hs as I am having problems with STM vs IO.

Here's my implementation of the Dining Philosophers in STM:
http://computationalthoughts.blogspot.com/2008/03/some-examples-of-software-transactional.html
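
In case the blog ever goes away, here is a minimal sketch of the same
idea (not the exact code from the post): every fork is a TMVar (), and a
philosopher grabs both neighbouring forks in a single transaction, so he
either gets both or retries and blocks.

\begin{code}
module Main where

import Control.Concurrent
import Control.Concurrent.STM
import Control.Monad (forever, forM_, replicateM)

philosopher :: Int -> TMVar () -> TMVar () -> IO ()
philosopher i left right = forever $ do
  atomically $ do            -- take both forks or none at all
    takeTMVar left
    takeTMVar right
  putStrLn ("Philosopher " ++ show i ++ " eats")
  threadDelay 100000
  atomically $ do            -- put both forks back
    putTMVar left ()
    putTMVar right ()

main :: IO ()
main = do
  let n = 5 :: Int
  forks <- replicateM n (newTMVarIO ())
  forM_ [0 .. n - 1] $ \i ->
    forkIO (philosopher i (forks !! i) (forks !! ((i + 1) `mod` n)))
  threadDelay 3000000        -- let them run for a few seconds
\end{code}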

I hope you'll find it helpful.

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: STM and fairness

2008-03-05 Thread Josef Svenningsson
Tim, Simon,

Thanks for your detailed descriptions. Much of my understanding was
confirmed. I'll see if I can send you a patch with my suggested fix as
soon as my teaching is over.

Thanks,

Josef

On Mon, Mar 3, 2008 at 2:03 PM, Tim Harris (RESEARCH)
[EMAIL PROTECTED] wrote:
 Hi,

  At the moment we don't make any particular effort to make threads runnable 
 in some specific order when they are unblocked.  The current implementation 
 is simply what was easiest to write.

  If I remember rightly threads blocked on TVars will initially be 
 half-woken, putting them on the same run-queue as their waker and leaving 
 the STM data structures intact.  When scheduled they will check whether or 
 not the TVars' contents differ from the values that caused them to block: if 
 the values are unchanged then a thread can block again without needing to 
 build up wait queue structures.  In Simon's example of 100 threads blocked on 
 a single-cell TVar buffer, this would mean 99 of them are validated and block 
 again without needing to re-execute the rest of the transaction containing 
 the TVar access.  This will probably happen within a single OS thread so 
 these are lightweight thread switches within the GHC run time rather than 99 
 OS thread switches.

  At some point it might be nice to look at using run-time feedback about how 
 individual TVars are used.  I suspect that, looking at it dynamically, there 
 are a few simple policies that would apply to most TVars (wake-all / 
 wake-one) with the caveat that anything other than wake-all must eventually 
 fall back to wake-all to preserve the intended semantics for retry.

  NB -- when thinking about a shared buffer built over TVars there's also the 
 possibility that a non-blocked thread will consume the resource ahead of a 
 blocked thread that has been woken.  As with programming with 
 locks/condition-variables, avoiding this case would need an explicit queue of 
 consumers to be maintained by the application (and symmetrically for 
 producers).

  In any case, running threads in something approximating the same order they 
 blocked sounds sensible to me.  The lists of threads blocked on a TVar are 
 doubly-linked (right?) so wouldn't need to be explicitly reversed.

  Tim








  -Original Message-
  From: Simon Peyton-Jones
  Sent: 29 February 2008 20:06
  To: Josef Svenningsson; glasgow-haskell-users@haskell.org
  Cc: Tim Harris (RESEARCH)
  Subject: RE: STM and fairness

  | I'd like to know a bit about the STM implementation in GHC,
  | specifically about how it tries to achieve fairness. I've been reading
  | "Composable Memory Transactions" but it does not contain much
  | detail on this specific matter. What I want to know boils down to
  | this: in what order are processes run which have been woken up from a
  | call to retry?

  Tim is the one who implemented this stuff, so I'm ccing him.

  If threads queue up on a single MVar, it's obvious how to achieve fairness 
 of a sort.  Furthermore, if 100 threads are blocked on one MVar, the 
 scheduler can wake up exactly one when the MVar is filled.  With STM it's 
 much less obvious.

  First, a thread may block on a whole bunch of TVars; if any of them are 
 changed, the thread should re-run.  So there is no single list of threads to 
 reverse or not reverse.

  Second, if 100 threads are blocked on a TVar, t, waking up just one of them 
 may not suffice -- it may read some more TVars and then retry again, 
 re-blocking itself on t (plus some more). The only simple thing to do is to 
 wake all of them up.  In common situations (e.g. a buffer), we may wake up 
 all 100 threads, only for 99 of them to lose the race and block again.

  This arises from the fact that transactions do a wonderful thing, by letting 
 you perform multiple operations atomically -- but that makes it harder to 
 optimize.


  All that said, you may well be right that one could do a better job of 
 scheduling.  For example, even though there may be lots of threads blocked on 
 a TVar, and all must be made runnable, they could perhaps be run in the same 
 order that they blocked, so the longest-blocked got to run first.   I don't 
 think we try to do that, but Tim would know.

  By all means suggest a patch!

  Simon

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


STM and fairness

2008-02-29 Thread Josef Svenningsson
Hi,

I'd like to know a bit about the STM implementation in GHC,
specifically about how it tries to achieve fairness. I've been reading
"Composable Memory Transactions" but it does not contain much
detail on this specific matter. What I want to know boils down to
this: in what order are processes run which have been woken up from a
call to retry? When programming with condition variables the standard
behaviour is that the process which has waited the longest is the
first one to get to run. But that doesn't seem to be the behaviour
here. Consider the following program:
\begin{code}
module STMFair where

import Control.Concurrent
import Control.Concurrent.STM

test n = do v <- newTVarIO 0
            mapM_ (\n -> forkIO (process n v) >> threadDelay delay) [1..n]
            atomically (writeTVar v 1)
            threadDelay delay

delay = 50

process id var = do putStrLn ("Process " ++ show id ++ " started")
                    atomically $ do
                      v <- readTVar var
                      if v == 0
                        then retry
                        else return ()
                    putStrLn ("Process " ++ show id ++ " finished")
\end{code}

When I run 'test 2' I expect it to print:
Process 1 started
Process 2 started
Process 1 finished
Process 2 finished

This would correspond to the oldest process being executed first. But
that is not what happens; instead I get this (ghci 6.8.2, Ubuntu
Linux):
Process 1 started
Process 2 started
Process 2 finished
Process 1 finished

This is certainly not the behaviour I would want. I discovered this
behaviour when implementing the dining philosophers using STM and
there one of the philosophers gets starved. Except, that he's not
quite starved. When I run the simulation long enough he will
eventually be able to eat but then for a long time there will be some
other philosopher that is starved. I find this behaviour very
mysterious and it would be nice to have some light shed on it.

Apart from this mysterious behaviour it seems quite easy to improve
the fairness of the implementation. From my examples above it seems
that the wait queues for a transactional variable do contain the
processes in the order they call retry (try running 'test n' for some
large n). It just seems that they are given to the scheduler in the
wrong order, so all that needs to be done is to reverse the list. Am I
right?

Thanks for reading,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: STM and fairness

2008-02-29 Thread Josef Svenningsson
On Fri, Feb 29, 2008 at 4:27 PM, Roberto Zunino [EMAIL PROTECTED] wrote:
 Josef Svenningsson wrote:
   What I want to know boils down to
   this: what order are processes run which have been woken up from a
   call to retry?

  IIUC, the order of wake up is irrelevant, since *all* the threads will
  re-run the transaction in parallel. So, even if thread 1 is the first to
  wake up, thread 2 might beat it in the race, and complete its
  transaction first.

That's not quite right since there is no true parallelism here. I'm
running on a single core (which I suppose I could have mentioned) and
so it is up to the scheduler to make sure that processes get a fair
chance at doing their business, i.e. achieving fairness. The point I
was trying to make is that the scheduler isn't doing a very good job
in this case.

  I suggest you put some random delay in your fairness tests, maybe using
  unsafeIOtoSTM, so that you can improve starvation ;-)

I'd rather fix the scheduler.

  Also, try running a very slow (much-delayed) transaction against several
  fast ones. I expect the slow one will never reach completion.

Indeed. This is a well known problem with STM but afaict orthogonal to
the problem I'm talking about.

  AFAIK, achieving fairness in STM can be quite hard (not unlike other
  mainstream approaches to concurrency, sadly).

Yes. Still, in the particular situation I showed I think we can do a
better job than what is currently being done.

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] lazy evaluation

2008-02-06 Thread Josef Svenningsson
On Feb 6, 2008 3:06 PM, Miguel Mitrofanov [EMAIL PROTECTED] wrote:

 On 6 Feb 2008, at 16:32, Peter Padawitz wrote:

  Can anybody give me a simple explanation why the second definition
  of a palindrome checker does not terminate, although the first one
  does?
 
  pal :: Eq a => [a] -> Bool
  pal s = b where (b,r) = eqrev s r []

  eqrev :: Eq a => [a] -> [a] -> [a] -> (Bool,[a])
  eqrev (x:s1) ~(y:s2) acc = (x==y && b, r) where (b,r) = eqrev s1 s2 (x:acc)
  eqrev _ _ acc  = (True,acc)

 I.    eqrev ""  (_|_) acc = (True, acc)
 II.a. eqrev "1" (_|_) ""  = ('1' == (_|_) && b, r) where (b,r) = eqrev "" (_|_) "1"
       By (I), (b,r) = (True, "1"), so eqrev "1" (_|_) "" = ((_|_), "1")
 II.b. eqrev "1" "1" ""    = ('1' == '1' && b, r) where (b,r) = eqrev "" "" "1"
       (b,r) = (True, "1"), so eqrev "1" "1" "" = (True, "1")

 Therefore, the least fixed point of \r -> eqrev "1" r "" is "1" and
 the answer is True.

  pal :: Eq a => [a] -> Bool
  pal s = b where (b,r) = eqrev' s r

  eqrev' :: Eq a => [a] -> [a] -> (Bool,[a])
  eqrev' (x:s1) ~(y:s2) = (x==y && b, r++[y]) where (b,r) = eqrev' s1 s2
  eqrev' _ _   = (True,[])

 I.    eqrev' ""  (_|_) = (True,[])
 II.a. eqrev' "1" (_|_) = ('1' == (_|_) && b, r ++ [(_|_)]) where (b,r) = eqrev' "" (_|_)
       By (I), (b,r) = (True,[]), so eqrev' "1" (_|_) = ((_|_),[(_|_)])
 II.b. eqrev' "1" [(_|_)] = ('1' == (_|_) && b, r ++ [(_|_)]) where (b,r) = eqrev' "" []
       (b,r) = (True,[]), so eqrev' "1" [(_|_)] = ((_|_),[(_|_)])
 Therefore, the least fixed point of \r -> eqrev' "1" r is [(_|_)] and
 the answer is (_|_). No wonder it hangs.

This proof also shows us where the problem lies and how to fix it. It
turns out to be really easy: change 'r++[y]' to 'r++[x]' and the
program works.
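
Spelled out, the corrected second version is just the code quoted above
with that one occurrence changed:

\begin{code}
eqrev' :: Eq a => [a] -> [a] -> (Bool,[a])
eqrev' (x:s1) ~(y:s2) = (x==y && b, r++[x]) where (b,r) = eqrev' s1 s2
eqrev' _ _            = (True,[])
\end{code}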

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Missing join and split

2008-01-01 Thread Josef Svenningsson
On Dec 28, 2007 11:40 PM, Mitar [EMAIL PROTECTED] wrote:
 Would not it be interesting and useful (but not really efficient) to
 have patterns something like:

 foo :: Eq a = a - ...
 foo (_{4}'b') = ...

 which would match a list with four elements ending with an element 'b'. Or:

 foo (_+';'_+';'_) = ...

 which would match a list with embedded two ';' elements. (Last _
 matched the remaining of the list.)

I suggest you take at look at HaRP, Haskell Regular Patterns:
http://www.cs.chalmers.se/~d00nibro/harp/

It hasn't been updated for a while but it should still be usable.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graph theory analysis of Haskell code

2007-12-06 Thread Josef Svenningsson
This sounds like a fun project and it is certainly feasible to do. I
thought I'd give you some pointers to fun stuff that people have been
doing in the past.

Thomas Reps has been doing program analysis since the dawn of time,
but one paper that seems particularly related to what you're trying to
do is "Identifying modules via concept analysis". You can find that and
the rest of his papers on his homepage:
http://pages.cs.wisc.edu/~reps/

There are many different characteristics of a program graph that can
be of interest. One that has recently given rise to some interesting
results is the notion of tree width. An example of its use is the
following paper: http://citeseer.ist.psu.edu/601409.html

Have fun,

Josef

On Dec 6, 2007 12:46 AM, Ivan Miljenovic [EMAIL PROTECTED] wrote:
 This isn't strictly Haskell related, but anyway.

 Next year I will be doing my honours in mathematics.  One possible
 topic for my thesis that I've thought of - and my supervisor is quite
 enthused about - is to use graph theory to analyse various textual
 sources, starting with source code but leaving the framework open
 enough to be able to extend it to other sources (e.g. email address
 books).

 How I envisage it happening is that a parser would be used to find all
 functions in the given code, treat these as nodes in the graph and
 then use directed edges to indicate which functions call other
 functions.  This resultant graph can then be analysed in various ways
 suitable to the context (e.g. find that a library module can be split
 into two since there are two completely separate trees present in the
 graph that don't interact at all, or if a function is only ever called
 by one other function then it can be subsumed into it).

 So, here is the question I ask of all of you: is this feasible?  Do
 you know if anything like this has ever been attempted before?  I know
 there are some other usages of graph theory related to source code
 (e.g. McCabes complexity metric [1]), but I couldn't seem to find
 anything related to what I'm proposing.  I intend to code this up in
 Haskell (possibly using FGL: I know of it, but haven't really looked
 at it) and use Haskell as my primary target for analysis, so in a
 sense the resultant graph could be seen as a Haskell equivalent to
 UML.


 [1] http://en.wikipedia.org/wiki/Cyclomatic_complexity

 --
 Ivan Lazar Miljenovic
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] -O2 bug in GHC 6.8.1?

2007-11-20 Thread Josef Svenningsson
On Nov 20, 2007 4:32 PM, Ian Lynagh [EMAIL PROTECTED] wrote:

 Hi Brad,

 On Tue, Nov 20, 2007 at 09:50:02PM +1000, Brad Clow wrote:
 
  $ ./test
  23
  24

 I can't reproduce this. Can you please tell us what platform you are on
 (e.g. x86_64 Linux) and what gcc --version says?

 Also, where did your GHC come from, e.g. bindists from the download
 page, self-compiled?

I can also reproduce this on an Ubuntu machine with a self-compiled
ghc. gcc is 4.1.2.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Fusing foldr's

2007-10-30 Thread Josef Svenningsson
On 10/29/07, Bulat Ziganshin [EMAIL PROTECTED] wrote:
 you may also look at these data:

   1,225,416 bytes allocated in the heap
 152,984 bytes copied during GC (scavenged)
   8,448 bytes copied during GC (not scavenged)
  86,808 bytes maximum residency (1 sample(s))

   3 collections in generation 0 (  0.00s)
   1 collections in generation 1 (  0.00s)

 if your hypothesis is true, amount of data copied and number of
 generation-1 collection should be much less in the second case

Indeed.

avg4:
880,935,612 bytes allocated in the heap
319,064,404 bytes copied during GC (scavenged)
318,965,812 bytes copied during GC (not scavenged)
201,080,832 bytes maximum residency (9 sample(s))

   1681 collections in generation 0 (  1.67s)
  9 collections in generation 1 ( 13.62s)

avgP:
1,761,224,604 bytes allocated in the heap
714,644 bytes copied during GC (scavenged)
593,184 bytes copied during GC (not scavenged)
184,320 bytes maximum residency (2 sample(s))

   1908 collections in generation 0 (  0.04s)
  2 collections in generation 1 (  0.00s)

Allocation is cheap, copying expensive.

All the best,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Letting the darcs test fail, if QuickCheck tests fail

2007-10-30 Thread Josef Svenningsson
On 10/30/07, Henning Thielemann [EMAIL PROTECTED] wrote:

 When following the description on
  
 http://www.haskell.org/haskellwiki/How_to_write_a_Haskell_program#Add_some_automated_testing:_QuickCheck
   then darcs will run the QuickCheck tests on each 'darcs record', but the
 new patch is also accepted by darcs if one of the tests fail. What is the
 most simple way to let 'darcs record' fail, when a QuickCheck test fails?

The same thing bit me when I prepared a package recently. The way I
solved it was to call the function quickCheck' instead of test. It
returns a boolean indicating whether the test was successful. If
it's false I call exitFailure. I posted some code to the wikibook:
http://en.wikibooks.org/wiki/Talk:Haskell/Packaging

Note that quickCheck' is only available in QuickCheck 2.
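
A driver along these lines is all it takes (a sketch only: the property
is a placeholder, and quickCheck' is the QuickCheck 2 function described
above -- adapt the name to whatever your QuickCheck version provides):

\begin{code}
module Main where

import Control.Monad (unless)
import System.Exit (exitFailure)
import Test.QuickCheck

prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = do
  ok <- quickCheck' prop_reverseTwice   -- True iff the property held
  unless ok exitFailure                 -- non-zero exit makes darcs reject
\end{code}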

All the best,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-29 Thread Josef Svenningsson
On 10/28/07, Isaac Dupree [EMAIL PROTECTED] wrote:
 Josef Svenningsson wrote:
  Less bogus timing:
  avg4: 18.0s
  avgS: 2.2s
  avgP: 17.4s
 
  OK, so these figures make an even stronger case for my conclusion :-)
  Single traversal can be much faster than multiple traversals *when
  done right*.

 Did you use +RTS -N2 on your program (or whatever it is that makes GHC
 actually use multiple threads? or is that not necessary?)  Anyway I
 assume you wouldn't get better than 9.0s, which is still much worse than
 2.2s.

Oh, this is getting embarrassing.. Indeed, I forgot to use +RTS -N2.
But using those flags yielded a very interesting result:

avgP: 4.3s

Superlinear speedup!? As you say, I would have expected something
slightly larger than 9s. I think what happens here is that for avg4
the entire list has to be kept in memory between the two traversals
whereas for avgP the beginning of the list can be garbage collected
incrementally as the two threads traverse it. This could mean that the
list never moves to the second generation in the memory manager and
that can maybe account for the additional time savings. I'm not sure
how to verify that this is the case though.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-29 Thread Josef Svenningsson
On 10/29/07, Josef Svenningsson [EMAIL PROTECTED] wrote:
 But using those flags yielded a very interesting result:

 avgP: 4.3s

 Superlinear speedup!? As you say, I would have expected something
 slightly larger than 9s. I think what happens here is that for avg4
 the entire list has to be kept in memory between the two traversals
 whereas for avgP the beginning of the list can be garbage collected
 incrementally as the two threads traverse it. This could mean that the
 list never moves to the second generation in the memory manager and
 that can maybe account for the additional time savings. I'm not sure
 how to verify that this is the case though.

Bulat kindly suggested I use +RTS -s to monitor the garbage collectors
behavior. It seems my hypothesis was right.

avg4:
387 Mb total memory in use
MUT   time    2.43s  (  2.47s elapsed)
GC    time   15.32s  ( 16.05s elapsed)

avgP (+RTS -N2):
3 Mb total memory in use
MUT   time    4.61s  (  2.51s elapsed)
GC    time    0.04s  (  0.06s elapsed)

So it seems that the garbage collector takes an awful lot of time when
we allocate a big list like this. Hmmm. Strikes me as somewhat
suboptimal.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-27 Thread Josef Svenningsson
On 10/26/07, Dan Weston [EMAIL PROTECTED] wrote:
 Thanks for letting me know about the Data.Strict library on Hackage. I
 will definitely make use of that! BTW, you left out an import
 Data.List(foldl') in your example.

Yes, Data.Strict can be pretty handy for getting the right strictness.
Sorry about the missing import.

 My timing test is an order of magnitude worse than yours. Do you have an
 extra zero in your list endpoint?

   I fed these functions to ghc with the -O2 and -threaded flags and
   timed them using the list [1..1000]. The result (best times out of
   several runs):
   avg4: 284 ms
   avgS: 184 ms
   avgP: 248 ms

 Using ghc -threaded -O2 --make Avg.hs for ghc 6.6.1, I ran your tests on
 [1..1000] and got the user times:

 avg4: 12.75 s
 avgS:  3.65 s
 avgP: 15.56 s

 The funny thing is that avg4/avgS = 3.5 for me and only 1.5 for you.  I
 understand that with only 1 processor my avgP time may be twice yours,
 but not the avgS or avg4.

Oooops.. My numbers are totally bogus. I had code that looked like the
following:
\begin{code}
main = do
time avg4 [1..1000]
time avg4 [1..1000]
time avg4 [1..1000]
time avgS [1..1000]
time avgS [1..1000]
time avgS [1..1000]
time avgP [1..1000]
time avgP [1..1000]
time avgP [1..1000]
\end{code}
Not very elegant, I know, but I thought it would do the job. Apparently
I was wrong. GHC does common subexpression elimination on all the
lists, so they're all shared between the different calls. Of course,
the first function call would always take a long time, but I ignored it,
thinking it was some anomaly. Anyway, I was totally sure that GHC only
did CSE on constructor expressions and not on arbitrary computations.
Guess I was wrong. A little searching revealed the following quote by
Simon PJ:

 GHC does a very simple form of CSE. If it sees
    let x = e in ...e...
 it replaces the inner 'e' by x.  But that's all at the moment.

Lesson learned.

Less bogus timing:
avg4: 18.0s
avgS: 2.2s
avgP: 17.4s

OK, so these figures make an even stronger case for my conclusion :-)
Single traversal can be much faster than multiple traversals *when
done right*.

 I have the following machine:

 Main memory size: 2026 Mbytes
 Num Processors: 1
 Processor Type: Intel(R) Xeon(TM) CPU 2.80GHz x32
 Clock Speed: 2790 MHZ

In case you're still interested my machine looks like this:

Memory: 2026 Mbytes
Processor: AMD Turion(tm) 64 X2 Mobile Technology TL-56
Clock Speed: 1800MHz

All the best,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Fusing foldr's

2007-10-26 Thread Josef Svenningsson
Sorry for reacting so late on this mail. I'm digging through some old mails...

On 10/12/07, Dan Weston [EMAIL PROTECTED] wrote:
 Always check optimizations to make sure they are not pessimizations!

 Actually, traversing the list twice is very cheap compared to space
 leakage, and accumulating pairs requires tuple boxing and unboxing which
 I don't know how to get GHC not to do.

I agree whole-heartedly that replacing multiple traversals with a
single traversal should be done with care, as it more often than not
results in a pessimization. Indeed, you showed just that with your
examples! But I thought it'd be interesting to see how it can
actually be an improvement if done carefully.

\begin{code}
import Control.Arrow
import qualified Data.Strict.Tuple as T
import Data.Strict.Tuple (Pair(..))
import Control.Parallel

avg4 = uncurry (/) . (foldl' (+) 0 &&& foldl' (\x y -> x + 1) 0)
avgS = T.uncurry (/) . foldl' (\p n -> ((+n) *!* (+1)) p) (0 :!: 0)
avgP = uncurry (/) . (foldl' (+) 0 ! foldl' (\x y -> x + 1) 0)

(*!*) f g (a :!: b) = f a :!: g b

(!) f g a = fa `par` (fa,ga)
  where fa = f a
        ga = g a
\end{code}

avg4 is the function which was best among Dan's benchmarks. avgS uses
strict tuples. I just threw in avgP for fun; it traverses the list in
parallel. Note: I do have a dual core machine, so it makes sense to try
avgP.

I fed these functions to ghc with the -O2 and -threaded flags and
timed them using the list [1..1000]. The result (best times out of
several runs):
avg4: 284 ms
avgS: 184 ms
avgP: 248 ms

It seems doing a single traversal can be faster if your write your
function carefully. Doing the traversal in parallel was beneficial but
not as good as the single traversal.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Binary constants in Haskell

2007-10-25 Thread Josef Svenningsson
On 10/24/07, Neil Mitchell [EMAIL PROTECTED] wrote:
 Hi

   Are there binary constants in Haskell, as
   we have, for instance, 0o232 for octal and
   0xD29A for hexadecimal?
 
  No, though it is an interesting idea.

 You can get pretty close with existing Haskell though:

 (bin 100010011)

 where bin :: Integer -> Integer, and is left as an exercise for the
 reader. Obviously it's not as high performance as proper binary
 literals, but if you write them as top-level constants, they'll only
 be computed once and shouldn't end up being in the performance
 critical bits.

To make it efficient you could use Template Haskell and have the bin
function generate the constant which could then be spliced in. I
suppose it would look something like:
$(bin 100010011)

Not too bad.
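
One possible bin along those lines (a sketch, not a library function: it
reads the decimal digits of its argument as binary digits and splices in
the resulting Integer literal at compile time):

\begin{code}
{-# LANGUAGE TemplateHaskell #-}
module Bin (bin) where

import Language.Haskell.TH

bin :: Integer -> Q Exp
bin = litE . integerL . go
  where
    go 0 = 0
    go n = case n `divMod` 10 of
             (q, d) | d <= 1    -> 2 * go q + d
                    | otherwise -> error "bin: digit must be 0 or 1"
\end{code}

Because of the stage restriction the splice has to live in another
module; there $(bin 100010011) evaluates to 275.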

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Prevent optimization from tempering with unsafePerformIO

2007-10-17 Thread Josef Svenningsson
On 10/17/07, Bernd Brassel [EMAIL PROTECTED] wrote:
  why do you want to do this unsafely,
  instead of just using 'length'?  unsafePerformIO is a very slow
  function, remember)

 The big picture is that we generate programs like this example in order
 to compile the functional logic language Curry down to Haskell.
 Basically, there are two ways to do this:

 a) write an interpreter for Curry in Haskell, e.g., by employing
 non-determinism monads
 b) extend Haskell by employing side effects

 Alternative a) is not really an issue for us. Doing it this way, all
 Curry programs will suffer in performance in such a magnitude that - in
 comparison - unsafePerformIO is super-fast. In addition, we already have
 interpreters for Curry which I do not assume a Haskell interpreter to
 outperform without 5 years worth of tuning.

 Alternative b) is the choice, because doing it this way, all
 deterministic, i.e., purely functional parts of Curry programs would
 take advantage of Haskell being a functional language. If then the logic
 parts are not so fast, e.g., because unsafePerformIO is slow, this does
 not matter so much. In comparison to other approaches (like Alternative
 a) and many other implementations of Curry) our slogan is: make
 functions fast and logic possible. Fast functions will outweigh the
 slowdown for logics. But to get functions fast employing optimization
 sounds like a good idea to me. But even without any optimization, our
 system can compare well to most other implementations for many
 applications.

May I suggest a third route that has the advantages of both your
approaches? The downside is, of course, that it takes a bit of work. My
suggestion is to do an effect analysis of your Curry programs to
identify the purely functional parts and compile them directly to pure
Haskell. The rest of the program can run in a monad. This approach
should be more robust than relying on unsafePerformIO. It is also very
much in the spirit of your slogan.
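
To make the third route concrete, here is a toy sketch (nothing to do
with your actual compiler): deterministic Curry functions become plain
Haskell, and only the non-deterministic parts pay for a monad -- the
list monad stands in for whatever search monad you would really use.

\begin{code}
-- purely functional part: compiled to direct Haskell
double :: Int -> Int
double x = x + x

-- non-deterministic part: compiled to monadic code
coin :: [Int]
coin = [0, 1]

example :: [Int]
example = do
  c <- coin            -- only the logic part is monadic
  return (double c)    -- the pure part is called directly
\end{code}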

Just my 2 cents,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Re: Mutable but boxed arrays?

2007-09-06 Thread Josef Svenningsson
On 9/6/07, Simon Marlow [EMAIL PROTECTED] wrote:
 Ketil Malde wrote:
  I, on the other hand, have always wondered why the strict arrays are
  called unboxed, rather than, well, strict?  Strictness seems to be
  their observable property, while unboxing is just an (admittedly
  important) implementation optimization.  I imagine that it'd be at least
  as easy to implement the strictness as the unboxedness for non-GHC
  compilers, and thus increase compatibility.

 You're quite right, that was a mistake, we should have called them strict
 arrays.  Hugs implements the unboxed arrays without any kind of unboxing, I
 believe.

Any chance of a Data.Array.Strict and deprecating Data.Array.Unboxed?

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Small question

2007-08-10 Thread Josef Svenningsson
On 8/10/07, John Meacham [EMAIL PROTECTED] wrote:
 On Thu, Aug 09, 2007 at 06:37:32PM +0100, Andrew Coppin wrote:
  Which of these is likely to go faster?
   type Quad = (Bool,Bool)
 ...
   data Quad = BL | BR | TL | TR
 ...
  I'm hoping that the latter one will be more strict / use less space.
  But I don't truely know...

 The second one will be significantly better for a couple of reasons. A
 simple counting of values that they can take on will show not only this
 but that they are not isomorphic even,

 (Bool,Bool) can be one of

 _|_
 (True,True)
 (True,False)
 (False,True)
 (False,False)
 (_|_,True)
 (_|_,False)
 (_|_,_|_)
 (True,_|_)
 (False,_|_)

 that is a total of 10 different cases, each time a bottom might appear,
 a thunk evaluation (or indirection) is involved.


 now, take the second case

 data Quad = BL | BR | TL | TR

 the possible values are

 _|_
 BL
 BR
 TL
 TR

 a whole half of the other representation.


Well, there are ways to improve the situation. If you want to remove
the inner bottoms from your type you can define Quad as:

type Quad = Data.Strict.Tuple.Pair Bool Bool

I'm not sure how much speed this will gain in practice and whether it
will beat the four constructor data type though. If anyone does some
measurements it'd be interesting to know.

Cheers,

Josef

PS. Data.Strict.Tuple lives in the strict package which can be found here:
http://www.cse.unsw.edu.au/~rl/code/strict.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Re: Re: monad subexpressions

2007-08-03 Thread Josef Svenningsson
On 8/3/07, Chris Smith [EMAIL PROTECTED] wrote:
 Neil Mitchell [EMAIL PROTECTED] wrote:
  I'm not convinced either, a nice concrete example would let people
  ponder this a bit more.

 I tried to provide something in my response to Simon.  Here it is again:

  One could sugar:

  do tax <- getTax
     return $ map (\price -> price * (1 + tax)) bill

  into:

  do return $ map (\price -> price * (1 + (<- getTax))) someNums

I think what Simon is worried about here is that the syntax in the
latter expression suggests that the effects of getTax will be
performed every time the lambda is applied. After all, getTax appears
inside the lambda. But in fact the side effects will only be
performed once. I agree with Simon that (<- getTax) shouldn't be
promoted outside a lambda.

Fwiw, I'm all in favor for some new piece of syntax for this problem.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Polymorphic variants

2007-07-25 Thread Josef Svenningsson

On 7/26/07, Jon Harrop [EMAIL PROTECTED] wrote:


Does Haskell have anything similar to OCaml's polymorphic variants?



Not as such, but it's possible to simulate them. As always, Oleg was
the one to demonstrate how:
http://okmij.org/ftp/Haskell/generics.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Practical Haskell question.

2007-07-22 Thread Josef Svenningsson

Michael,

I think what you're trying to do is perfectly doable in Haskell and I think
the right tool for it is arrows, as Tomasz Zielonka mentioned before. I
suggest you take a look at the following paper which uses arrows to enforce
security levels in the code:
http://www.cis.upenn.edu/~stevez/papers/abstracts.html#LZ06a
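
The following is not the encoding from that paper, just a toy sketch of
the underlying idea in applicative style: keep the required capabilities
next to each action, compose actions without running them, and execute
only once every collected check has been granted. All the names are made
up for the example.

\begin{code}
data Check = NeedsNetwork | NeedsDatabase | NeedsRoot
  deriving (Eq, Show)

-- An IO action tagged with every capability it will need.
data Checked a = Checked [Check] (IO a)

instance Functor Checked where
  fmap f (Checked cs io) = Checked cs (fmap f io)

instance Applicative Checked where
  pure x = Checked [] (pure x)
  Checked cs f <*> Checked cs' x = Checked (cs ++ cs') (f <*> x)

-- Run the composed action only if all collected checks are granted;
-- otherwise nothing at all is executed.
runChecked :: [Check] -> Checked a -> IO (Maybe a)
runChecked granted (Checked cs io)
  | all (`elem` granted) cs = fmap Just io
  | otherwise               = return Nothing
\end{code}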

Cheers,

Josef

On 6/25/07, Michael T. Richter [EMAIL PROTECTED] wrote:


 On Mon, 2007-25-06 at 12:19 +0300, Benja Fallenstein wrote:

2007/6/25, Michael T. Richter [EMAIL PROTECTED]:

 OK, just to prevent this getting side-tracked: I'm absolutely
uninterested in the results of performActionA before determining if
performActionB is permitted/possible/whatever.  Think more in terms of
security permissions or resource availability/claiming than in terms of
chaining results.  I want to know before I begin to collect the results of
performAction* that I will actually stand a chance at getting results at
all.


Uh, the posts you quote were precisely about how to do that. No
side-tracking going on. :-)


It looked to me like there were people arguing about whether the x
returned from one action was going to be used in the next action.

Let me try and rephrase the question. :)

A conventional approach to what I'm doing would be something like this (in
bad pseudocode):

doStuff():
if checkPossible([opA, opB, opC]):
A
B
C
else:
exception Preconditions not met

My objection to this is its error-prone scaffolding:

   1. There's no enforced link between the checking operations and the
   actual operations.  By mistake or by deliberate action it is possible to put
   operations in the main body which have not been checked by the guards.
   2. As code evolves and changes, it is very easy to have the check
   diverge from the contents of the body as well.


Now if the actions were trivial or easily reversible, an alternative model
is something like this (in bad pseudocode) where it's assumed that each
operation checks for its privileges/capabilities/whatever as part of its
operation:

doStuff2():
A
try:
B
try:
C
catch:
undoB
throw
catch:
undoA

This looks to me like Don Stewart's executable semi-colons and could be
coded as a pretty conventional monad (unless my eyes are deceiving me).  But
if doing A, say, involved expensive operations (think: generating an RSA key
or making a database connection on a heavily-loaded server) or if doing B
involved modifying some external state that is difficult to undo this is a
less-than-ideal model.  Let's say that C fails for whatever reason
(insufficient privileges, the database server is dead, the phase of the moon
is wrong for the species of chicken sacrificed at the keyboard -- anything),
then we've got time wasted in A and B has just changed something we can't
easily unchange.

So I'd like some way of getting the automated check of
permission/capability/availability/whatever done before performing the
actual actions.

Now in a language where functions are identifiable types, a solution could
look like this (among a myriad of other possible solutions):

check(Operation):
case Operation of:
A:
return checkConditionA
B:
return checkConditionB
C:
return checkConditionC

runStuff(actions):
for each action in actions:
if not check(action.left):
throw CheckFailure
for each action in actions:
action.left(action.right)

doStuff3():
actions=[(A, a_args), (B, b_args), (C, c_args)]
try:
runStuff(actions)
catch CheckFailure:
actions=nil

The check() function called here can use the identity of the action
provided plus any information provided externally (network connections open,
permissions available, etc.) to pass/fail the
capabilities/resources/whatever and the action's execution is deferred until
the check has passed.  The action's check *AND* its execution is unavailable
to the programmer so there's less room for fraud and oversight and all the
other things which make programs buggy and unreliable and such joys to work
with both as a programmer and as a user.  In fact with languages as
malleable as Ruby (or possibly even Python) some of the ugly scaffolding
above could be made to vanish behind the scenes leaving pretty clean code
behind.  (Uglier languages like C/C++, of course, would leave all the
scaffolding lying around, but it would still be doable.)

But of course this can't be done in Haskell this way because functions
aren't items in Haskell.  There is no function equality check.  My check()
function can't look like:

check :: (a -> b)
check A = ...
check B = ...
check C = ...
check _ = error no such function

This leaves me in a position I can't think myself out of (hence the cry
for help).  I'd like it to be possible to have a do block with as little
visible scaffolding as possible (ideally *none*) where I 

Re: [Haskell-cafe] reimplementing break (incorrectly) quickcheck p list gives me feedback that it breaks on list, but not what predicate test caused the breakage

2007-07-06 Thread Josef Svenningsson

On 7/6/07, Thomas Hartman [EMAIL PROTECTED] wrote:


I am a total quickcheck noob. Is there a way to find out what predicate test
function is, below?


The trick that I know of to do that is to not generate a function in
the first place, but a data type which can represent various functions
of this type. Whenever we want to use the data type as a function in
the test we convert it to a function. Let's take an example:

data IntFun = Plus5 | Mult5 deriving Show

apply :: IntFun -> Int -> Int
apply Plus5 = (+ 5)
apply Mult5 = (* 5)

instance Arbitrary IntFun where
  arbitrary = elements [Plus5, Mult5]

prop_foo f a = apply f a == a

Of course, the data type will typically be much more complicated,
depending on what kind of functions you want to be able to generate.

This trick is documented in the following paper:
http://www.st.cs.ru.nl/papers/2006/koop2006-TestingOfHigherOrderFunctionsAPLAS.pdf

HTH,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Abstraction leak

2007-07-01 Thread Josef Svenningsson

On 6/30/07, Jon Cast [EMAIL PROTECTED] wrote:

On Friday 29 June 2007, Jon Cast wrote:
 Here's my solution (drawn from a library I'll be posting Real Soon Now):
snip solution

I forgot to point out that this is 75-90% drawn from a library called
Fudgets[1], which is probably the most extended practical meditation to date
on programming with lazy streams in Haskell.  Embedding that approach in a
monadic interface seems to be my own idea, though.


Koen Claessen had the same idea. He used it for designing parsers. See:
http://www.cs.chalmers.se/~koen/pubs/entry-jfp04-parser.html


Jonathan Cast
http://sourceforge.net/projects/fid-core
http://sourceforge.net/projects/fid-emacs

[1] http://www.md.chalmers.se/Cs/Research/Functional/Fudgets/


Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [GHC] #1468: :browse should browse currently loaded module

2007-06-29 Thread Josef Svenningsson

On 6/29/07, Simon Marlow [EMAIL PROTECTED] wrote:


Josef Svenningsson wrote:
 On 6/28/07, *GHC* [EMAIL PROTECTED] mailto:[EMAIL PROTECTED] wrote:

 #1468: :browse should browse currently loaded module

+---
   Reporter:  [EMAIL PROTECTED]
 mailto:[EMAIL PROTECTED]  |  Owner:
   Type:  feature request| Status:  new
   Priority:  low|  Milestone:
 Component:  GHCi   |Version:  6.6.1
   Severity:  trivial|   Keywords:
 Difficulty:  Easy (1 hr)| Os:  Unknown
   Testcase: |   Architecture:  Unknown

+---
 With module Foo loaded, :browse should be equivalent to :browse Foo.


 Yes! I've wanted this a thousand times. If there was a voting mechanism
 on Trac I would vote for this feature.

Add yourself to the CC list for the bug - this is our low-tech voting
mechanism
for now.



Hmmm, I don't know if it's me being computer illiterate, but I can't seem to
add myself to the CC list after logging in. In fact not much seems to change
after I've logged in. Could this be a possible side effect from changes you
did to help fix the spam problem a while ago? Help appreciated. If you need
to do some tweaking to my user account my id is: josef.

Cheers,

Josef
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1468: :browse should browse currently loaded module

2007-06-29 Thread Josef Svenningsson

On 6/29/07, Ian Lynagh [EMAIL PROTECTED] wrote:

On Fri, Jun 29, 2007 at 12:48:33PM +0200, Josef Svenningsson wrote:

 to do some tweaking to my user account my id is: josef.

I've tweaked your account.


Thanks. I hope I did the right thing when adding myself to the CC list.

Cheers,

Josef
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [GHC] #1468: :browse should browse currently loaded module

2007-06-28 Thread Josef Svenningsson

On 6/28/07, GHC [EMAIL PROTECTED] wrote:


#1468: :browse should browse currently loaded module

+---
  Reporter:  [EMAIL PROTECTED]  |  Owner:
  Type:  feature request| Status:  new
  Priority:  low|  Milestone:
Component:  GHCi   |Version:  6.6.1
  Severity:  trivial|   Keywords:
Difficulty:  Easy (1 hr)| Os:  Unknown
  Testcase: |   Architecture:  Unknown

+---
With module Foo loaded, :browse should be equivalent to :browse Foo.



Yes! I've wanted this a thousand times. If there was a voting mechanism on
Trac I would vote for this feature.

Josef


--

Ticket URL: http://hackage.haskell.org/trac/ghc/ticket/1468
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [Haskell-cafe] Functional Data Structures

2007-06-21 Thread Josef Svenningsson

On 6/21/07, Michael T. Richter [EMAIL PROTECTED] wrote:


 Is there a good book or web site outlining decent pure-lazy-functional
data structures, with or without code samples?



Chris Okasaki's publication page is a goldmine when it comes to functional
data structures, lazy and otherwise.
http://www.eecs.usma.edu/webs/people/okasaki/pubs.html

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Higher order types via the Curry-Howard correspondence

2007-05-13 Thread Josef Svenningsson

I think both Benja's and David's answers are terrific. Let me just add
a reference.
The person who's given these issues the most thought is probably Per
Martin-Löf. If you want to know more about the meaning of the logical
connectives you should read his "On the Meanings of the Logical
Constants and the Justifications of the Logical Laws" [1]. It consists
of three lectures which I think are quite readable, although I can
recommend skipping the first lecture, at least on a first read-through
since it's pretty heavy going.
In the beginning of the third lecture you will find the classic quote:
"the meaning of a proposition is determined by what it is to verify
it, or what counts as a verification of it".
This is essentially what both Benja and David said.

hth,

Josef

[1] http://www.hf.uio.no/ifikk/filosofi/njpl/vol1no1/meaning/meaning.html

On 5/11/07, Benja Fallenstein [EMAIL PROTECTED] wrote:

Adding some thoughts to what David said (although I don't understand
the issues deeply enough to be sure that these ideas don't lead to
ugly things like paradoxes)--

2007/5/10, Gaal Yahas [EMAIL PROTECTED]:
 Since the empty list inhabits the type [b], this theorem is trivially
 a tautology, so let's work around and demand a non-trivial proof by
 using streams instead:

 data Stream a = SHead a (Stream a)
 sMap :: (a -> b) -> Stream a -> Stream b

 What is the object Stream a in logic?

It's not that much more interesting than list. The 'data'
declaration can be read as,

To prove the proposition (Stream a), you must prove the proposition
'a' and the proposition 'Stream a.'

In ordinary logic this would mean that you couldn't prove (Stream a),
of course, but that just corresponds to strict languages in which you
couldn't construct an object of type Stream a (because it would have
to be infinite). To make sense of this, we need to assume a logic in
which we can have similar 'infinite proofs.' (This is the part where
I'm not sure it's really possible to do. I haven't read the Pierce
chapter David refers to.)

With that reading, (Stream a) is basically the same proposition as (a)
-- as evidenced by

f x = SHead x (f x)  -- f :: a -> Stream a
g (SHead x _) = x    -- g :: Stream a -> a

We can find more interesting propositions, though. Here's an example
(perhaps not useful, but I find it interesting :-)):

data Foo a b = A a | Fn (Foo a b -> b)

We can prove this proposition if we can prove one of these propositions:

a
a -> b
(a -> b) -> b
((a -> b) -> b) -> b
...

Each of these is weaker than the previous one; if x is a proof of
proposition n, then (\f -> f x) is a proof of proposition n+1. The
fourth one is a tautology in classical logic, but not in
intuitionistic logic.

- Benja
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: More speed please!

2007-05-11 Thread Josef Svenningsson

On 5/11/07, Simon Peyton-Jones [EMAIL PROTECTED] wrote:

| I'm replying to a rather old thread here, about unboxing in functions. Duncan
| had a continuation monad which passed around some data type that would be nice
| to unbox. You discussed strictness annotations in function types as a 
potential
| solution. I have a different tack on the problem which seems potentially
| useful. I've experimented with doing local defunctionalization on the module.

Interesting suggestion, Josef.  In general, local defunctionalisation would be 
an interesting transformation to try. I'm not sure how well it would scale: the 
larger the scope, the more distinct functions and the bigger the 
dispatch table.


Indeed, the dispatch table could grow big, but I'm not sure it would be
a scalability problem.  Note that all the code that goes into these
dispatch tables (I call them apply functions) is ripped out from
other places in the program. So there is really no new code being
added; it's only shuffled around.

On the other hand, I don't know how GHC deals with large case
expressions; if they are a problem, be it by increasing the
compilation time or the runtime of the program, then there might
of course be a problem.


Also your transformation is semantically transparent (no effect) whereas Duncan 
is prepared to add ! annotations that really make things stricter, just as ! 
annotations in data type decls do today.  So presumably he will get further 
than you will, because he is making more assumptions.


Indeed. But I think the main advantage for Duncan's approach, over
local defunctionalization, is its general applicability. Local
defunctionalization only kicks in under very special circumstances and
even then isn't always a net win (or so my intuition tells me). The
bang annotations otoh can be inserted wherever you like and would
presumably work transparently across module borders.



Meanwhile, I've thought a bit more about Duncan's idea.  One attractive aspect 
is that you can regard it as a direct extension of Haskell's existing mechanism 
of ! on data types, making the {-# UNPACK #-} pragma look inside function types 
as well as looking inside data types.  I like that. It makes it sounds less ad 
hoc than I previously thought.  I'll open a Trac ticket for this thread, 
http://hackage.haskell.org/trac/ghc/ticket/1349


Sounds good! It would be a cool thing to have. I'm looking forward to
seeing it implemented in GHC :-)

Cheers,

Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: More speed please!

2007-05-01 Thread Josef Svenningsson

I'm replying to a rather old thread here, about unboxing in functions. Duncan
had a continuation monad which passed around some data type that would be nice
to unbox. You discussed strictness annotations in function types as a potential
solution. I have a different tack on the problem which seems potentially
useful. I've experimented with doing local defunctionalization on the module.
This is a long mail as I will try to explain in some detail what it is that I
have done. Please be patient.

Normal defunctionalization is about replacing the primitive function type
a -> b with an algebraic data type which I'll call Fun a b. Not all
functions will be eliminated, as we will see, but the program will be first
order after the transformation. The core of the transformation is that every
lambda in the program gives rise to a new constructor in the Fun data type, and
whenever we apply a function we instead call a newly created apply function
with the following type: Fun a b -> a -> b. This is basically what JHC does.
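
As a toy illustration of that shape (a generic example, not Duncan's
monad): each lambda becomes a constructor carrying its free variables,
and every call site goes through apply.

\begin{code}
{-# LANGUAGE GADTs #-}
module Defun where

data Fun a b where
  AddN   :: Int -> Fun Int Int   -- the closure \x -> x + n, capturing n
  Double :: Fun Int Int          -- the closure \x -> x * 2

apply :: Fun a b -> a -> b
apply (AddN n) x = x + n
apply Double   x = x * 2

-- map written against the defunctionalized representation
mapF :: Fun a b -> [a] -> [b]
mapF f = map (apply f)

example :: [Int]
example = mapF (AddN 3) [1, 2, 3]   -- [4,5,6]
\end{code}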

Defunctionalization is normally a whole program transformation (which is why
JHC is a whole program compiler). But sometimes it can be done on a per module
basis. This is where *local* defunctionalization comes in. The key to local
defunctionalization is that we often can divide the data type Fun into several
disjoint data types. We can do this whenever there are several different
function spaces that never get mixed up. And sometimes we're even so lucky
that a function space is totally contained in one module. Then we can do
local defunctionalization of that particular function space only and
completely within that module without changing its interface. This case often
comes up when using the continuation monad and Duncan's code is not an
exception.

So, I've manually done local defunctionalization on Duncan's code. It gives
rise to two types which I've called Fun1 and Fun2. They look like follows
(including the Put monad):

\begin{code}
newtype Put a = Put {
   runPut :: Fun2 a
   }

data Fun1 a where
 Bind :: (a -> Put b) -> Fun1 b -> Fun1 a
 Then :: Put b  -> Fun1 b -> Fun1 a
 Run  :: Fun1 ()
 FlushOld :: !(Fun1 ()) -> !Int -> !(ForeignPtr Word8) -> !Int -> !Int
   -> Fun1 ()

data Fun2 a where
 Return :: a -> Fun2 a
 Bind2  :: Put a -> (a -> Put b) -> Fun2 b
 Then2  :: Put a -> Put b -> Fun2 b
 Flush  :: Fun2 ()
 Write  :: !Int -> (Ptr Word8 -> IO ()) -> Fun2 ()
\end{code}
Intuitively every constructor corresponds to a closure. I've chosen the name
for the constructor based on which function the closure appears in.

The respective apply functions for these data types acts as interpreters and
executes the corresponding code for each constructor/closure. Their type look
as follow:

\begin{code}
apply1 :: Fun1 a -> a -> Buffer -> [B.ByteString]
apply2 :: Fun2 a -> Fun1 a -> Buffer -> [B.ByteString]
\end{code}

Now, the cool thing is that once GHC starts optimizing away on these apply
functions they will be unboxed and no Buffer will ever be created or passed
around. Here is the core type for apply1:
\begin{core}
$wapply1_r21p :: forall a_aQu.
 PutMonad.Fun1 a_aQu
 -> a_aQu
 -> GHC.Prim.Addr#
 -> GHC.ForeignPtr.ForeignPtrContents
 -> GHC.Prim.Int#
 -> GHC.Prim.Int#
 -> GHC.Prim.Int#
 -> [Data.ByteString.Base.ByteString]
\end{core}
This is exactly what Duncan wanted, right? I declare victory :-)

However, things are not all roses. There are some functions that will
not be unboxed as we hope for with this approach, for instance the function
flushOld (see Duncan's code). To achieve the best possible optimization I
think one would have to perform strictness analysis and the worker-wrapper
transformation twice, once before doing local defunctionalization and then
again on the apply functions generated by the defunctionalization process.
This should give the code that Duncan wants I believe.

I think it should be relatively straightforward to implement local
defunctionalization in GHC but it should not be turned on by default as the
number of modules where it is beneficial is rather few.

The complete defunctionalized version of Duncan's module is attached.

I'm sure there are a lot of things that are somewhat unclear in this message.
Feel free to ask and I'll do my best to clarify.

Cheers,

Josef
{-# OPTIONS -fglasgow-exts -fbang-patterns -cpp #-}

module PutMonad (
-- * The Put type
  Put
, run -- :: Put () - L.ByteString

-- * Flushing the implicit parse state
, flush   -- :: Put ()

-- * Primitives
, write   -- :: Int - (Ptr Word8 - IO ()) - Put ()
, word8   -- :: Word8 - Put ()
  ) where

import Foreign
import qualified Data.ByteString.Base as B (
   ByteString(PS), LazyByteString(LPS),
   inlinePerformIO, mallocByteString, nullForeignPtr)
import qualified Data.ByteString.Lazy as L (ByteString)

-- Our internal buffer 

Re: [Haskell-cafe] Bloom Filter

2007-05-01 Thread Josef Svenningsson

Hi,

Just a small comment on one of the comments.

On 5/1/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:

Also, rather than this:

add :: Bloom a - a - Bloom a

a better argument order is this:

insert :: a - Bloom a - Bloom a

That way, you can use it with foldr.


Hmmm. If you want to create a Bloom using a fold wouldn't it make more
sense to use foldl'? I think the argument order is fine.
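
To make the trade-off concrete, here is what the two argument orders look
like when you build a filter from a list. Data.Set is only a stand-in for
the Bloom type, since the signatures under discussion are hypothetical:

\begin{code}
import Data.List (foldl')
import qualified Data.Set as Set

-- foldl' wants an accumulator-first function, i.e. the add order above
fromListL :: Ord a => [a] -> Set.Set a
fromListL = foldl' (flip Set.insert) Set.empty

-- foldr wants an element-first function, i.e. the insert order above
fromListR :: Ord a => [a] -> Set.Set a
fromListR = foldr Set.insert Set.empty
\end{code}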

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Template Haskell a suitable macro language?

2007-04-24 Thread Josef Svenningsson

On 4/24/07, Jacques Carette [EMAIL PROTECTED] wrote:

In Ocaml, you can frequently use polymorphic variants to get the same
effect.

Which means that if you are willing to do enough type-class-hackery, it
should, in principle, be possible to do the same in Haskell.  But it
sure isn't as convenient!


You seem to imply that there is an encoding of polymorphic variants in
Haskell using type classes. While I know that it's possible to achieve
similar effects using type classes I haven't seen a direct encoding.
If there is such an encoding I would be very interested to hear about
it.


This all points to some clear need for more ``flavours'' of polymorphism
being needed (in Haskell), to be able to express *in the type system*
what TH allows you to say outside.


I totally agree with this.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [C2hs] anyone interested in developing a Language.C library?

2007-04-21 Thread Josef Svenningsson

On 4/21/07, Donald Bruce Stewart [EMAIL PROTECTED] wrote:

chak:
 Duncan Coutts wrote:
 If anyone is interested in developing a Language.C library, I've just
 completed a full C parser which we're using in c2hs.
 
 It covers all of C99 and all of the GNU C extensions that I have found
 used in practise, including the __attribute__ annotations. It can
 successfully parse the whole Linux kernel and all of the C files in all
 the system packages on my Gentoo installation.

 Great work!

 Using this as a basis for a Language.C would be a really worthwile project.


I think people should be very interested in this.

The ability to easily manipulate and generate C would quickly insert
Haskell into another useful niche. There must *surely* be real money
in writing nice Haskell programs that optimise/analyse/refactor/generate
C code...


Unfortunately the niche is not empty. There is an ocaml library called
cil which is supposed to be pretty sweet for manipulating C code. But
I still think a Haskell library would be a very good idea, and perhaps
one can look at cil for inspiration.

cil can be found here:
http://hal.cs.berkeley.edu/cil/

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] First order Haskell without Data

2007-04-19 Thread Josef Svenningsson

Hi,

Just a comment or two on the implications of converting higher-order
functions to data.

The paper you reference about this uses the method of
defunctionalization. This is a whole program transformation and might
therefore not be suitable in a compiler such as GHC or YHC. On the
other hand, it is more or less exactly what JHC uses.

If you want to support separate compilation you should go for more
conventional closure conversion. This is in essence what every
compiler which compiles a higher order language does. But if you want
to have a typed language which can express this transformation you
will be needing quite a sophisticated type system.
http://www.cs.cmu.edu/~rwh/papers/closures/popl96.pdf
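
As a rough illustration of the flavour of typed closure conversion (a sketch
only, using an existential type; the paper above uses a more refined system):

{-# LANGUAGE ExistentialQuantification #-}

-- A converted closure: code paired with its captured environment. The
-- environment's type is hidden by the existential, which is the part that
-- needs a type system beyond plain first-order types.
data Closure a b = forall env. Closure env (env -> a -> b)

applyC :: Closure a b -> a -> b
applyC (Closure env code) = code env

-- Closure-converting (\x -> x + n): the free variable n moves into the
-- explicit environment.
addN :: Int -> Closure Int Int
addN n = Closure n (\env x -> env + x)

-- applyC (addN 3) 4 == 7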

Just my 2 öre.

Josef

On 4/19/07, Neil Mitchell [EMAIL PROTECTED] wrote:

Hi,

I've been wondering what is essential to Haskell and what isn't. Not
from a point of view of what could be removed from the language, but
what could be removed from a Core language.

Given the features of higher-order, laziness and data types:

Laziness can be converted to higher-order functions

Data types can be converted to higher-order functions

Higher-order functions can be converted to Data


So as far as I can see it we have left:
{data-types only}
{higher-order functions only}

But we don't have laziness only, so my question is if it is possible
to represent all of Haskell in a first-order language without data
types, but with laziness. If its possible to do so in a strict
first-order language without data types, then I'd also be interested.

Are any of these known results?

Thanks

Neil

References:

* data types -> higher order is in "Efficient Interpretation by
Transforming Data Types and Patterns to Functions"
http://www.cs.nott.ac.uk/~nhn/TFP2006/Papers/03-JansenKoopmanPlasmeijer-EfficientInterpretation.pdf

* laziness -> functions, functions -> data is in "Definitional
Interpreters for Higher-Order Programming Languages"
http://www.brics.dk/~hosc/local/HOSC-11-4-pp363-397.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] idea for avoiding temporaries

2007-03-09 Thread Josef Svenningsson

On 3/8/07, John Meacham [EMAIL PROTECTED] wrote:

it seems we can almost do this now without adding any new API calls,
just have 'thawArray' and 'freezeArray' perform the check, and behave
like 'unsafeThawArray' or 'unsafeFreezeArray' when they are the only
reference.

The compiler may even be able to statically replace some
calls to thawArray or freezeArray with the unsafe versions with an
extension of the rules syntax

{-# RULES forall a | unique a . freezeArray a = unsafeFreezeArray a #-}

this syntax actually came up in a previous discussion about wanting to
fire rules only when the argument is a constant literal (though, you don't
care about the particular constant value it is)

I imagine infering the uniqueness of values shouldn't be that hard as a
form of it is already done for avoiding thunk-updates.


You have to be careful here. Uniqueness typing is not the same as
usage analysis but the two are confusingly similar. Imagine a function
which takes, say, an array as its argument. If it says that the array
is unique, then there must only be a single reference to that array, as
the function probably updates it destructively. Compare this to the
situation where we say that the array is only used once in the
function. The array may have other references to it in this case; the
function doesn't care.
This boils down to the fact that the usage analysis propagates
information to the point where a value is created; should the thunk be
updateable or not? Whereas with uniqueness analysis we propagate
information to the point where it is used: is it OK to destructively
update the value instead of copying it?

I'm also not sure that inferring uniqueness types in the compiler
would be the way to go. I think David wants to be certain that his
arrays are not copied. Having to inspect the output of the compiler is
not the nicest way to do this. Some mechanism which enables the
programmer to tell the compiler what his intent is (such as the
uniqueness type system a la Clean) would be much preferable.

Of course, that doesn't mean that your idea is useless. It could
still, and probably would be, a very valuable optimization in the
compiler. I just don't think that it will help David with his
particular problem.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] seq (was: Article review: Category Theory)

2007-01-19 Thread Josef Svenningsson

On 1/20/07, Brian Hulley [EMAIL PROTECTED] wrote:

Neil Mitchell wrote:
 Hi Brian,
 Is there any solution that would allow excess laziness to be removed
 from a Haskell program such that Hask would be a category?

 class Seq a where
   seq :: a -> b -> b

 Then you have a different seq based on the types, and it doesn't go
 wrong. You would probably want deriving Seq support.

This seems an amazingly neat solution to a really terrible problem, so:

1) Does anyone know why this was not used in the first place?


It *was* used before. See section 6.2.7 of the Haskell 1.4 report. It
was thrown out in Haskell98. I don't remember why though.
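
(For reference, a minimal sketch of what such a class-based seq looks like;
the instances are my own illustration, not text from the 1.4 report:)

import Prelude hiding (seq)

class Seq a where
  seq :: a -> b -> b

-- Instances force their argument to WHNF by pattern matching on it;
-- primitive types would need built-in instances.
instance Seq Bool where
  seq b x = case b of
    True  -> x
    False -> x

instance Seq [a] where
  seq ys x = case ys of
    []    -> x
    _ : _ -> x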


2) Would it be good to use this in future versions of Haskell?

3) Is there any practical program which requires the current seq that could
not be rewritten to use the typeclass seq?


I'll pass on these two questions.

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] State monad strictness - how?

2007-01-11 Thread Josef Svenningsson

On 1/11/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Josef Svenningsson wrote:
 Take the state monad for example. Should it be
 strict or lazy in the state that it carries
 around? What about the value component?
 ...both strict and lazy variants are useful.

I wrote:
 Are those really needed?

 ...it wouldn't be very convenient, would it?
 Sometimes I find that I want strict state by
 default and then I don't want to sprinkle my
 code with seqs.

I don't think that is so inconvenient. Why do we
need to define getStrict, putStrict, getsStrict,
etc., when it is perhaps even more clear to write
get $!, put $!, (gets . ($!)), etc.?

The same goes for Iavor's monad library.


Indeed. I'm embarrassed that I've never realized this before. I
suppose I though the tuple solution was so elegant that I never
realized there was a simpler solution at hand.


 Now, the challenge here is to design a library
 which doesn't explode in size from all the
 various possibilities for strictness or
 laziness.

I am now pretty convinced that the only thing we
need is two versions of each monad, varying only
the strictness of >>=.

Then, of course, we will need runFoo for each, and
evalFoo and execFoo for each state monad.

And adaptors that allow you to run a lazy
calculation inside a strict one and vice-versa. So
we need an asStrict function and an asLazy
function for each lazy/strict pair of monads.

I think that is all we need. Not too bad.

I am very impressed that we get most of that
almost for free in Iavor's library.


Yes, it seems quite feasible.


 http://www.cs.chalmers.se/~josefs/monadlib/
 ...instantiating this with different pair types
 with different strictness properties will give
 us total control over strictness for state and
 value.

Hmm. Your current implementation doesn't seem to
do it that way. You use tuples for both the lazy
version and the strict version, and each defines
its own Monad instance for all Pair types. So it
is impossible to use both in the same module, even
with hiding.


The way I enable laziness in a strict monad and vice versa is to use a
non-standard bind operator, strictBind or lazyBind. But that's not
really scalable. The whole architecture that I used in my library
isn't really all that good. The reason I came up with it was to solve
a completely different problem which doesn't really apply to this
library anyway. The library design you outline above is indeed the way
to go.
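
(For concreteness, a minimal sketch of what a lazy versus a strict bind for
a plain state monad might look like; this is illustrative only, not code
from the library above:)

newtype State s a = State { runState :: s -> (a, s) }

-- Lazy bind: the intermediate pair is matched lazily, so neither the
-- result nor the state is forced until demanded.
lazyBind :: State s a -> (a -> State s b) -> State s b
lazyBind m k = State $ \s ->
  let (a, s') = runState m s
  in runState (k a) s'

-- Strict bind: the pair is matched strictly and the state is forced,
-- which is what stops the usual build-up of thunks in the state.
strictBind :: State s a -> (a -> State s b) -> State s b
strictBind m k = State $ \s ->
  case runState m s of
    (a, s') -> s' `seq` runState (k a) s'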


I tried to work on this a little. I defined a
strict Pair type and tried to find a single Monad
instance that will give the right strictness for
both if you just vary between lazy and strict
pairs.

We need that both of the following converge
in constant stack space:

take 100 $ evalState (repeatM $ modify (+1)) 0
execStateStrict (replicateM_ 10 $ modify (+1)) 0

(You can verify that this is true if you use the
standard evalState, and replace execStateStrict
with runIdentity . execStateT.)

I was unable to hit upon the right combination of
seqs in the Monad instance. Is it really possible?

Of course, you could use a newtype of tuples and
define separate Monad instances. But then we are
not gaining anything over just defining the lazy
and strict monads directly.


I'm not sure exactly what you're trying to achieve here. If the tuple
type you have is strict in both components then you're never going to
get these examples to work. However, if you choose the lazy state
monad and choose the tuples carefully then both examples can be made to
terminate. Here's an example:
ex1 = take 100 $ evalState
 ((repeatM $ modify (+1))::StateP StrictLeft Int [()])
 0
ex2 = execStateStrict
 ((replicateM_ 10 $ modify (+1)) :: StateTP StrictLeft Int Identity ())
 0

The first example also terminates with the all-lazy pair (,); what
matters is the laziness of the right component.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] State monad strictness - how?

2007-01-10 Thread Josef Svenningsson

Yitzchak,

I agree with you that both lazy and strict monads are important and
that we should have both options in a monad library.

But the fun doesn't end there. There are other strictness properties
to consider. Take the state monad for example. Should it be strict or
lazy in the state that it carries around? What about the value
component? I think the answer to these questions are the same as for
monadic strictness above: both strict and lazy variants are useful.

Now, the challenge here is to design a library which doesn't explode
in size from all the various possibilities for strictness or laziness.
In fact I did once bite the bullet and came up with a library that
does all this. Though I haven't released it publicly yet. I never took
the time to polish the code to the point where I wouldn't be
embarrassed about showing it to others.

If you kick me hard enough I might release the library.

Cheers,

Josef

On 1/10/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Hi Bulat,

I wrote:
 [State and StateT] should be consistent. I
 would much prefer for them both to be lazy.

Bulat Ziganshin wrote:
 imho, lazy monads (as any other lazy things) is a source of beginner's
 confusion. therefore it may be better to provide default monads as strict
 and lazy ones - for one who knows what he wants - with a Lazy prefix, e.g.
 LazyST, LazyState...

Well, as long as both are provided, that is fine with me.

But I do not think that laziness in monad methods is a reason
for beginners' confusion.

First of all, it is natural to get a little confused about strictness
at the beginning. I'm not sure it happens more often with
monads than anywhere else.

If there is more confusion about strictness with monads,
it is because of the fact that many introductions/tutorials
confuse all monads with IO. They say something like:

So how do you create side effects in the real world? That is
what monads are for.

No, no, no! That is what ** IO ** is for. Most monads are pure!

In fact, I think making the default strict will create more confusion.

We should help beginners to understand right from the start that
do-notation is not a procedure of commands for the computer
to carry out. It is just a special syntax for defining functions. We
use it when it is more natural to describe the effect of a function
in a step-by-step style, just as happens sometimes in mathematics.
But the compiler is under no obligation to follow our steps literally.

Except with IO - when dealing with the real world, we need
to be able to specify the exact order in which things happen.

ST represents using physical memory as a fast storage device.
So it is really IO in disguise.

Regards,
Yitz
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] State monad strictness - how?

2007-01-10 Thread Josef Svenningsson

On 1/10/07, Yitzchak Gale [EMAIL PROTECTED] wrote:

Hi Josef,

Josef Svenningsson wrote:
 ...the fun doesn't end there. There are other strictness properties
 to consider.

Could be. But after using mtl heavily for a few years now,
I find that in practice the only one where have felt the need
for control over strictness is =, like Dean's example.


Yes. For most uses this finer control of strictness is just overkill.
But in the rare cases when you really need this tweakability then it's
a royal pain if you don't have it.


 Take the state monad for example. Should it be strict or
 lazy in the state that it carries around? What about the value
 component? I think the answer to these questions are the same as for
 monadic strictness above: both strict and lazy variants are useful.

Are those really needed? Can't the strictness of the state be
fully controlled by seq with runState, get, and put, and by
choosing lazy or strict =? And similarly with value?


Yes, you're right. But it wouldn't be very convenient, would it?
Sometimes I find that I want strict state by default and then I don't
want to sprinkle my code with seqs. Furthermore this extra level of
control is not that difficult to implement in a library.


 Now, the challenge here is to design a library which doesn't explode
 in size from all the various possibilities for strictness or laziness.

The same challenge exists in many of the Data.* libraries.
I think this is very important.


Indeed.


 In fact I did once bite the bullet and came up with a library that
 does all this. Though I haven't released it publicly yet. I never took
 the time to polish the code to the point where I wouldn't be
 embarrassed about showing it to others.
 If you kick me hard enough I might release the library.

My boot is not long enough :). But I would love to see
what you did.


:-) Ok, I've put some files under the following url:
http://www.cs.chalmers.se/~josefs/monadlib/

It might need some explanation since I use the module system quite
heavily. For a monad such as the state monad the hierarchy looks like
this:
* Control.Monad.State.Base contains the type declaration and basic
 functionality, but NOT instances of the monad class.
 This module is not exposed.
* Control.Monad.State.Lazy
* Control.Monad.State.Strict
 Contains instances for monad classes.
* Control.Monad.State is supposed to be the default and just reexports
 Control.Monad.State.Strict.

Furthermore, again taking the state monad as example, the monad is
parameterized on the type of pair used in the definition of the monad.
So instead of:
newtype State s a = State { runState :: (s -> (a, s)) }
we have:
newtype StateP p s a = StateP { runStateP :: (s -> p a s) }

Now, instantiating this with different pair types with different
strictness properties will give us total control over strictness for
state and value. Data.Pair provides various pair types for this purpose.
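
(A sketch of how instances for such a parameterized monad might look,
assuming the Pair class with pair/casePair from the Data.Pair module quoted
later in this archive; the instances are my own illustration, not
necessarily the library's code:)

newtype StateP p s a = StateP { runStateP :: s -> p a s }

instance Pair p => Functor (StateP p s) where
  fmap f m = StateP $ \s ->
    casePair (\a s' -> pair (f a) s') (runStateP m s)

instance Pair p => Applicative (StateP p s) where
  pure a    = StateP $ \s -> pair a s
  mf <*> mx = StateP $ \s ->
    casePair (\f s' ->
      casePair (\x s'' -> pair (f x) s'') (runStateP mx s'))
      (runStateP mf s)

instance Pair p => Monad (StateP p s) where
  m >>= k = StateP $ \s ->
    casePair (\a s' -> runStateP (k a) s') (runStateP m s)

-- Choosing p = (,) gives the ordinary lazy state monad; choosing one of
-- the strict pair types changes how eagerly the components are forced
-- whenever an intermediate pair is built.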

Enjoy,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: TArray

2006-09-20 Thread Josef Svenningsson

Hi,

I get the exact same thing with ghc-6.5.20060914.

Weird.

Josef

On 9/20/06, Bulat Ziganshin [EMAIL PROTECTED] wrote:

Hello glasgow-haskell-users,

can someone try to compile this one-line module:

import Control.Concurrent.STM.TArray

with a recent 6.5 builds, preferably mingw32 ones?

it doesn't work for me, although TVar and other modules import
without any problems; and i see TArray.hi module along with TVar.hi
and so on. the message is:

Failed to load interface for `Control.Concurrent.STM.TArray':
  locations searched:
Control/Concurrent/STM/TArray.hi
Control/Concurrent/STM/TArray.hi-boot

compiler is, again,
http://www.haskell.org/ghc/dist/current/dist/ghc-6.5.20060901-i386-unknown-mingw32.tar.gz


--
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Traversing a graph in STM

2006-09-19 Thread Josef Svenningsson

On 9/19/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:


On Sep 18, 2006, at 4:47 AM, Einar Karttunen wrote:

 On 18.09 01:23, Josef Svenningsson wrote:
 On 9/17/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:
 You can associate a unique name with each traversal, and store a set
 of traversals at each node (instead of a mark bit).  But this set
 grows without bound unless you follow each traversal with a
 cleaning
 traversal which removes a traversal tag from the set.  And you'd
 need some way to generate the unique names.

 Well, if your set implementation used weak pointers there would be no
 need for a cleaning traversal. The garbage collector will take
 care of
 that. The only slightly tricky thing is to keep a live pointer to the
 unique traversal name for the entire duration of the traversal. But I don't
 think that should be a big problem.


This just amounts to saying we can use the GC to implement the
cleanup traversal on our behalf.

Indeed.


I'd be quite surprised if this
were actually more efficient.

It is a lot more efficient in the sense that the GC is already
written. We don't have to implement a cleanup traversal ourselves.


But this is all a bit moot, as Einar
observes:

 This suffers from the problem that two traversals reading the
 same parts of the graph would have a good chance to make each other
 retry.

Any solution which stores traversal states in the nodes has this
problem.  Fundamentally you can't  update the state of graph nodes in
any way using STM and expect to run multiple traversals concurrently
over the same subgraph.


Alas, yes.

All the best,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Traversing a graph in STM

2006-09-17 Thread Josef Svenningsson

On 9/17/06, Jan-Willem Maessen [EMAIL PROTECTED] wrote:


On Sep 13, 2006, at 3:37 AM, Einar Karttunen wrote:

 Hello

 Is there an elegant way of traversing a directed graph in STM?

 type Node  nt et = TVar (NodeT nt et)
 type Edge  et= TVar et
 data NodeT nt et = NodeT nt [(Node nt et, Edge et)]

 type MyGraph = Node String Int

 When implementing a simple depth first search we need a way to
 mark nodes (= TVars) as visited. In addition multiple concurrent
 searches should be possible.

 Is it possible to avoid passing around an explicit Set of visited
 nodes?

You can associate a unique name with each traversal, and store a set
of traversals at each node (instead of a mark bit).  But this set
grows without bound unless you follow each traversal with a cleaning
traversal which removes a traversal tag from the set.  And you'd
need some way to generate the unique names.


Well, if your set implementation used weak pointers there would be no
need for a cleaning traversal. The garbage collector will take care of
that. The only slightly tricky thing is to keep a live pointer to the
unique traversal name for the entire duration of the traversal. But I don't
think that should be a big problem.

Cheers,

Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Variants of a recursive data structure

2006-08-03 Thread Josef Svenningsson

Klaus,

You've gotten many fine answers to your question. I have yet another
one which is believe is closest to what you had in mind. The key to
the solution is to add an extra type parameter to Labelled like so:

data Labelled f a = L String (f a)

Now you can use it to form new recursive type with Mu:

type LabelledExp = Mu (Labelled Exp)

And it is also possible to write an unlabel function which knows very
little about the structure of Exp:

unlabel :: Functor f => Mu (Labelled f) -> Mu f
unlabel (Mu (L _ r)) = Mu (fmap unlabel r)

Another bonus is that it's all Haskell98.

The name I came up with for the trick of adding a higher-kinded type
parameter to Labelled is "Functor Transformer". "Transformer" - because
it transforms the type it is applied to (in this case Exp), and
"Functor" - because when using Mu to tie the recursive knot one often
requires the types to be instances of Functor, as I'm sure you're aware.
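
Assembled into a small compilable module (the Functor instances, which the
snippets above leave implicit, are filled in; this is just my rendering of
the idea):

newtype Mu f = Mu (f (Mu f))

data Exp e = Num Int | Add e e

instance Functor Exp where
  fmap _ (Num n)   = Num n
  fmap f (Add x y) = Add (f x) (f y)

data Labelled f a = L String (f a)

instance Functor f => Functor (Labelled f) where
  fmap f (L s x) = L s (fmap f x)

type SimpleExp   = Mu Exp
type LabelledExp = Mu (Labelled Exp)

-- Knows nothing about Exp beyond its Functor instance.
unlabel :: Functor f => Mu (Labelled f) -> Mu f
unlabel (Mu (L _ r)) = Mu (fmap unlabel r)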

Good luck with whatever it is you need this for.

Josef

On 8/3/06, Klaus Ostermann [EMAIL PROTECTED] wrote:

Hi all,

I have a problem which is probably not a problem at all for Haskell experts,
but I am struggling with it nevertheless.

I want to model the following situation. I have ASTs for a language in two
variatns: A simple form and a labelled form, e.g.

data SimpleExp = Num Int | Add SimpleExp SimpleExp

data LabelledExp = LNum Int String | LAdd LabelledExp LabelledExp String

I wonder what would be the best way to model this situation without
repeating the structure of the AST.

I tried it using a fixed point operator for types like this:

data Exp e = Num Int | Add e e

data Labelled a = L String a

newtype Mu f = Mu (f (Mu f))

type SimpleExp = Mu Exp

type LabelledExp = Mu Labelled Exp

The SimpleExp definition works fine, but the LabeledExp definition doesn't
because I would need something like Mu (\a -> Labeled (Exp a)) where \
is a type-level lambda.

However, I don't know how to do this in Haskell. I'd need something like the
. operator on the type-level. I also wonder whether it is possible to
curry type constructors.

The icing on the cake would be if it would also be possible to have a
function

unlabel :: LabeledExp -> Exp

that does *not* need to know about the full structure of expressions.

So, what options do I have to address this problem in Haskell?

Klaus

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Code review: efficiency question

2006-05-03 Thread Josef Svenningsson

Brian,

You might also want to take a look at the list fusion functionality in
GHC which often can help optimize your programs when programming with
lists.
http://www.haskell.org/ghc/docs/latest/html/users_guide/rewrite-rules.html#id3153234
It doesn't help in your particular program but it might be usable for
you in the future.
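
(A toy example of the kind of pipeline the build/foldr fusion described
there handles: with -O the intermediate lists produced by the enumeration,
filter and map are fused away and the whole expression compiles down to a
simple loop.)

sumSquaresOfEvens :: Int -> Int
sumSquaresOfEvens n = sum (map (^ 2) (filter even [1 .. n]))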

Cheers,

/Josef

On 5/3/06, Brian Hulley [EMAIL PROTECTED] wrote:

Bulat Ziganshin wrote:
 [ideas including reverseMapM_]
 you will laugh, but speed of your two solutions depends on so many
 factors (including size of CPU cache) that noone can say that is
 better in general. although for small lists reverseMapM_ should be
 faster than reverse+mapM. what will be faster - using of higher-order
 function or direct recursion, i can't say, it's a really
 counter-intuitive area of ghc optimizer :)

 of course, i don't think that all that really matters for your program
 (drawing should anyway need much more time than looping). just use
 higher-level approach (that makes code simpler to write, understand
 and maintain) and don't bother your mind :)

Hi Bulat!
Thanks for the suggestions about reverseMapM_ etc.
It seems that since the speeds of the two solutions can be relatively
faster/slower on different platforms/CPUs I might as well just use the
combination of existing functions mapM_ and reverse at the moment to get
readable code with the least amount of effort :-)

Best regards, Brian.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] show for functional types

2006-04-05 Thread Josef Svenningsson
On 4/5/06, Robert Dockins [EMAIL PROTECTED] wrote:
 Hey, if we wanted a private conversation, we'd take it off-list. :-)

:-)

  Do you have any reference to the fact that there is any diagreement
  about the term? I know it has been used sloppily at times but I think
  it is pretty well defined. See section 4 of the following paper:
  http://www.dina.kvl.dk/~sestoft/papers/SondergaardSestoft1990.pdf

 http://en.wikipedia.org/wiki/Referential_transparency
 http://lambda-the-ultimate.org/node/1237

 It may be that experts have a well defined notion of what it means,
 but I think that non-experts (ie, most people) have a pretty vague
 idea of exactly what it means.

The wikipedia article was very enlightening on this subject,
especially the disclaimer on the top :-) Thanks!

  So it's a standard substitutivity property. The only problem here is
  that Haskell has a pretty hairy denotational semantics and I don't
  think anyone has spelled it out in detail.

 I think that may be the main problem, at least in this context.  We
 can take a nice lovely definition of rt, as above, but its
 meaningless without a good semantic model.  So we're forced to
 approximate and hand-wave, which is where the disagreement comes in.

Yes, I know what you mean. In the last few weeks this legacy of
hand-waving has been showing its ugly face on the haskell-prime list
as well. Maybe it's time to sit down and define properly what we mean
in certain contexts. It seems we would need more than one semantics to
cover the different ways that people reason about Haskell programs. Just
so that one can say: "Here I'll be using Fast'n-Loose Reasoning(TM)" or
"This law holds in the Handwaving Semantics(TM)". The point is to be
able to reference these precisely defined approximations and thus enjoy
being both sloppy and precise at the same time.
But it's real work doing this kind of thing. During a graduate course
at Chalmers we started on something like this but it grew pretty hairy
pretty quickly.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Alternatives to . for composition

2006-03-27 Thread Josef Svenningsson
FYI, Cayenne used the center dot as composition. See the System$HO module.
http://www.cs.chalmers.se/~augustss/cayenne/system.html
I remember liking it but I think the ring operator would be closer to
mathematics notation and indeed the best choice.

Cheers,

/Josef

On 3/25/06, Dylan Thurston [EMAIL PROTECTED] wrote:
 At http://hackage.haskell.org/trac/haskell-prime/wiki/CompositionAsDot ,
 there is a list of possible Unicode replacements for the '.'
 operator.  Oddly, the canonical one is missing (from
 http://www.unicode.org/charts/PDF/U2200.pdf ):

 2218  RING OPERATOR
   = composite function
   = APL jot
 00B0 degree sign
 25E6 white bullet

 I don't think any other Unicode character should be considered.

 (Is this the approved way to send minor updates like this?)

 Peace,
 Dylan Thurston


 -BEGIN PGP SIGNATURE-
 Version: GnuPG v1.4.1 (GNU/Linux)

 iD8DBQFEJXsQVeybfhaa3tcRApNiAJ9eSfuIgaRkbJaOle1IG5AmzWoOfACdH9U1
 Vh/63jQ4c0Rft041WGEbut8=
 =HF0S
 -END PGP SIGNATURE-


 ___
 Haskell-prime mailing list
 Haskell-prime@haskell.org
 http://haskell.org/mailman/listinfo/haskell-prime



___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] multiple computations, same input

2006-03-27 Thread Josef Svenningsson
On 3/28/06, Neil Mitchell [EMAIL PROTECTED] wrote:
  This feels like a situation Parsec users would find themselves in all the
  time.  When you have a bunch of parsers in a 'choice', does the start of the
  input stream linger until the last parser is executed?

 No, as soon as one token is accepted from any parser, that input is
 decided upon, and it will never go back. If you want that behaviour
 you have to wrap the particular parser in try, which does give the
 backtracking (and space leak)

I personally find this behaviour terribly confusing. It makes writing
the parser highly unmodular. It forces me to know exactly what a
certain parser recognizes to know whether I need to wrap a 'try'
around it when I compose it in parallel with another parser. Which is
why I much prefer to use parsing libraries based on Koen Claessen's
technique which performs all parses at the same time. It works
breadth-first on the parse forest (the set of parse trees).
Text.ParserCombinators.ReadP is one example which uses this technique.
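
(A tiny illustration using ReadP's symmetric choice: both alternatives are
explored in parallel, so no 'try' is needed even though one alternative is
a prefix of the other.)

import Text.ParserCombinators.ReadP

keyword :: ReadP String
keyword = string "for" +++ string "forall"

-- readP_to_S keyword "forall" yields both parses (in some order):
--   [("for","all"), ("forall","")]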

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Strict tuples

2006-03-21 Thread Josef Svenningsson
On 3/21/06, Simon Marlow [EMAIL PROTECTED] wrote:
 By all means have strict tuples in a library somewhere. They don't need
 to have special syntax.

I have a module Data.Pair which provides pairs with different strictness
properties. Perhaps it can be used as a starting point.

Cheers,

/Josef
-
-- |
-- Module  :  Data.Pair
-- Copyright   :  (c) Josef Svenningsson 2005
-- License :  BSD-style
--
-- Maintainer  :  [EMAIL PROTECTED]
-- Stability   :  experimental
-- Portability :  portable
--
-- Several pair data types with different strictness properties
--
--
module Data.Pair ( Pair(..),
		   StrictLeft(..),
		   StrictRight(..),
		   StrictPair(..)
		  ) where

-- |A class for pairs. We need this to have a consistent interface for
--  several different pair types with different strictness properties.
--  Minimal complete instances are either @first@, @second@ and @pair@
--  or @casePair@ and @pair@.
class Pair p where
  first    :: p a b -> a
  first    = casePair (\a _ -> a)
  second   :: p a b -> b
  second   = casePair (\_ b -> b)
  casePair :: (a -> b -> c) -> p a b -> c
  casePair c p = c (first p) (second p)
  pair     :: a -> b -> p a b

propPair p = p == pair (first p) (second p)

data StrictLeft  a b = StrictLeft !a  b
data StrictRight a b = StrictRight a !b
data StrictPair  a b = StrictPair !a !b

instance Pair (,) where
  first  (f,_) = f
  second (_,s) = s
  pair f s = (f,s)

instance Pair StrictLeft where
  first  (StrictLeft f _) = f
  second (StrictLeft _ s) = s
  pair f s = StrictLeft f s

instance Pair StrictRight where
  first  (StrictRight f _) = f
  second (StrictRight _ s) = s
  pair f s = StrictRight f s

instance Pair StrictPair where
  first  (StrictPair f _) = f
  second (StrictPair _ s) = s
  pair f s = StrictPair f s
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Looking for an efficient tree in STM

2006-03-18 Thread Josef Svenningsson
Sorry for the slow reply,

On 3/8/06, Einar Karttunen ekarttun@cs.helsinki.fi wrote:

 Does anyone have an efficient tree implemented in STM that
 supports concurrent updates in an efficient fashion? This
 seems surprisingly hard to implement - a normal binary
 tree with links as TVar is very slow and does not scale
 very well.

I'm just wondering whether it is important for you that all the internal
links are TVars. That of course depends on what kind of API you're
imagining for your data structure. I would imagine though that a single
TVar pointing to the root of your favourite functional tree implementation
will be quite sufficient. My guess is it would also be the fastest way of
implementing it.
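
(A minimal sketch of that suggestion: one TVar at the root, pointing at an
ordinary purely functional map. Every update does go through the single
root TVar, which is the contention trade-off, but the structure itself
needs no per-node TVars.)

import Control.Concurrent.STM
import qualified Data.Map as Map

type STMMap k v = TVar (Map.Map k v)

newSTMMap :: STM (STMMap k v)
newSTMMap = newTVar Map.empty

insert :: Ord k => k -> v -> STMMap k v -> STM ()
insert k v root = do
  m <- readTVar root
  writeTVar root (Map.insert k v m)

lookupSTM :: Ord k => k -> STMMap k v -> STM (Maybe v)
lookupSTM k root = do
  m <- readTVar root
  return (Map.lookup k m)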
Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: The worst piece of syntax in Haskell

2006-02-21 Thread Josef Svenningsson
On 2/22/06, Claus Reinke [EMAIL PROTECTED] wrote:
 class Monad m
    => MonadPlus m
    where ...

 class Ord a
    => Ix a
    where ...

 instance Integral a
    => Eq (Ratio a)
    where ...

 still difficult?-) works just as well when the constraint lists get longer.

This is the style I've adopted and it makes things a little better but not
much. I still found it difficult to browse through my library even with
this kind of layout.

 ps. I like that it's the same way as for type signatures.

Well, it's good that the class constraint syntax for type signatures is
consistent with that of class and instance declarations. But it is still
the wrong syntax.

Cheers,

/Josef
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Comment Syntax

2006-02-02 Thread Josef Svenningsson
On 2/2/06, Henrik Nilsson [EMAIL PROTECTED] wrote:
 Hi all,

 To corroborate Wadler's law further. :-)

 Josef wrote:

  Oh yes, it does happen that a single line comment begins with a
  special symbol. It has happened to me on several occasions when using
  haddock annotation to my source code. It is all too easy to forget that
  extra space. With incomprehensible error messages as a result.

 But might that not just mean that the error messages ought to be
 improved?

 I don't know how hard that would be, but after having played around
 a bit with GHC, the messages I get are either of the type
 "parse error on input '--|'" or of the type "Not in scope: `--'"
 (followed by lots of other stuff not being in scope etc).

 If this really is a big problem for beginners, it would not seem
 totally infeasible to add some special code that helpfully suggests
 that a space perhaps ought to be inserted?

 Or have you seen significantly worse error messages?

My point here was not that the error messages were that terrible. I just
wanted to point out to Manuel that it does happen that single line comments
start with a symbol. Which makes the current comment syntax somewhat
awkward.

Cheers,

/Josef
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: Comment Syntax

2006-02-01 Thread Josef Svenningsson
On 2/2/06, John Meacham [EMAIL PROTECTED] wrote:
 On Thu, Feb 02, 2006 at 02:31:32AM +0100, Josef Svenningsson wrote:
  I still think there is an inconsistency here. And it has to do with
  maximal munch lexing. Maximal munch is what we normally expect from a
  lexer for a programming language. But the way comments work at the
  moment breaks maximal munch. The longest possible read is to read the
  whole line as a comment and not interpret for instance --^ as an
  operator. It breaks any programmers' intuition, not only beginners'. I
  still get it wrong from time to time.

 huh? this is exactly the opposite. maximal munch means that it would
 consume everything and then interpret it as an operator. having it the
 other way would be a special case because you would have to stop
 consuming input after the first --.

I knew this response was coming. It basically comes down to how one
interprets the maximal munch rule. I know there are plenty of people who
agree with you. But there are those that agree with my standpoint as well.
I'm not going to propose that we start arguing about this. I suppose we'll
have to use other arguments to persuade each other about the comment
syntax.
/Josef
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] Expanding do notation

2006-01-07 Thread Josef Svenningsson
On 1/7/06, Chris Kuklewicz [EMAIL PROTECTED] wrote:
 When you put print (head p) at the end, it keeps a reference to the
 whole list p which is your space leak.  If you want to store the head
 of p, this *should* work:

 main = do
   n <- getArgs >>= return . read . head
   let p = permutations [1..n]
   headOfP <- return (head p)
   mapM_ (putStrLn . concatMap show) $ take 30 p
   putStr $ "Pfannkuchen(" ++ show n ++ ") = "
   putStrLn . show $ foldl' (flip (max . steps 0)) 0 p
   print headOfP

 The headOfP is computed as an IO action at that point in the program,
 so headOfP does not hold a reference to p as a whole.

Without having tried this I'd say that you should use
Control.Exception.evaluate instead of 'return'. Or you could use 'seq'.
I suspect that otherwise the 'head p' will not be evaluated until the last
line and you will have the same problem as before.
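
(A minimal sketch of that suggestion, with a hypothetical forceHead helper:)

import Control.Exception (evaluate)

-- return (head p) merely wraps an unevaluated thunk that still points at p;
-- evaluate (head p) forces it to WHNF right away, so what we keep around no
-- longer drags the whole list along.
forceHead :: [a] -> IO a
forceHead p = evaluate (head p)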
Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] why don't we have const Ptrs?

2005-11-02 Thread Josef Svenningsson
Hi,

Here's a way to do pretty much what you're after. The idea is to add an
extra parameter to the Ptr type to indicate if it is a const pointer or
not.

 data Ptr const a

To indicate the constness we create a dummy data type which will show when the pointer type is *not* const.

 data NotConst

Now we can give more refined types to peek and poke like so:

 peek :: Ptr const a -> IO a
 poke :: Ptr NotConst a -> a -> IO ()

With this setup peek will work with both kinds of pointers without any
casting. Constness will also be inferred. If a function is polymorphic
in a const argument, that means that it doesn't change the pointer.
Higher rank polymorphism can be used to enforce that a pointer
is const.

This way of doing it is perhaps not the most beautiful. It would be a
little nicer if we had data kinds as Omega has. And it's a shame that we
need to go outside Haskell98 if we want to enforce constness, since we
need to use higher rank polymorphism. Nevertheless this solution
addresses most of the issues that you considered.
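
(A version of the sketch that actually compiles, wrapping the ordinary
Foreign.Ptr instead of replacing it; the names CPtr, peekC and pokeC are
made up for the illustration:)

{-# LANGUAGE EmptyDataDecls #-}

import Foreign.Ptr (Ptr)
import Foreign.Storable (Storable, peek, poke)

-- A wrapper around Ptr carrying a phantom constness parameter.
newtype CPtr const a = CPtr (Ptr a)

data NotConst

-- Reading works at any constness; writing demands NotConst.
peekC :: Storable a => CPtr const a -> IO a
peekC (CPtr p) = peek p

pokeC :: Storable a => CPtr NotConst a -> a -> IO ()
pokeC (CPtr p) x = poke p x

-- A function that only reads is inferred polymorphic in the constness
-- parameter, which is the "constness is inferred" point above.
addContents :: CPtr c Int -> CPtr c' Int -> IO Int
addContents p q = do
  x <- peekC p
  y <- peekC q
  return (x + y)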

Cheers,

/Josef

On 11/2/05, David Roundy [EMAIL PROTECTED] wrote:
 Hello all,

 I was thinking this morning as I lay on the floor (where I sleep) about
 static typechecking, and how much more wonderful Haskell is than any other
 language, when it occurred to me that when working with pointers, Haskell
 actually has *less* static typechecking than C or C++. It was a very
 disturbing thought, so much so that I was almost compelled to arise early
 to place this question before this learned audience.

 Why is it that in C++ I can write

 void strcpy(char *dest, const char *src);

 but in Haskell I must import this function as

 foreign import ccall unsafe "static string.h strcpy"
   strcpy :: Ptr CChar -> Ptr CChar -> IO ()

 and lose that wonderful information that the function doesn't modify the
 contents of its second argument?

 One could pretty easily create a ConstPtr type which one could peek into,
 but not poke to, but then you'd have to explicitly convert a Ptr into a
 ConstPtr when passing it as an argument. That feels a bit silly. One could
 get around this by introducing a class

 class ReadablePtr p where
   peek :: p a -> IO a
   peekOff ...

 and then make both Ptr and ConstPtr instances of this class, but this still
 seems like a very hackish solution.

 Moreover, I'd like to be able to have const objects quite apart from Ptrs,
 such as a const Handle, which I can read from, but cannot write to, or a
 const IORef--and we wouldn't want to leave out const ForeignPtrs. Of
 course, even reading affects a Handle's internal state, so one would need
 to be explicit about what const indicates. But it seems to me that in the
 IO world there are a whole slew of things that refer to other things which
 could all be grouped together.

 And a const attribute ought to be derived, so that if I create a data type

 data FooPtr = FooPtr String (Ptr Foo)

 one should ideally be able to automatically understand that a const FooPtr
 holds a const (Ptr Foo).

 One could go further, at least when dealing with Ptrs, and create a way of
 handling restricted pointers--which we could interpret as a const pointer
 to an object that cannot be changed by anyone else either. One could
 safely create restricted pointers with a function of the type

 mallocRestrictedPtr :: (Ptr a -> IO ()) -> RestrictedPtr a

 which would allow one to ensure at the typechecking level that
 RestrictedPtrs point to memory that cannot be modified. There's still some
 unsafety involved, in that you could read out of bounds, but you would
 know that apart from that possibility the contents of a RestrictedPtr truly
 will never change.

 So my question is, how would one implement such an annotation extension?
 I'd like to be able to pass a (Ptr a) as a (Const (Ptr a)) without an
 explicit typecast, since the Const really isn't changing the type of the
 pointer, it's just marking it as one that can't be modified. A function
 that accepts a (Const (Ptr a)) should also accept a (Restricted (Ptr
 a))--but Restricted pointers are really just pudding, as they only differ
 from Const pointers in what optimizations are allowed. On the other hand,
 it's not just compiler optimizations that they would allow, but also
 user-code optimizations, which could be much more useful. They also have
 the advantage of making certain unsafe functions safe.

 The hard part seems to be the lack of a conversion. One could quite easily
 implement a

 data Const a = Const a -- this constructor is *not exported*
 toConst :: x -> Const x
 unsafeAccessConst :: Const x -> x
 peek :: Const (Ptr a) -> IO a
 peekOff ...

 etc, and everything would work fine, except that you'd always need to
 explicitly convert from Ptr to Const Ptr. Perhaps one could make Const be
 a class as well as a data type:

 class (Const a) x where
   toConst :: x -> Const a
 instance (Const x) x where toConst = Const
 instance (Const x) (Const x) where toConst = id

 and then one could write code like

 peek :: Const (cp a) => cp a -> IO a

 which would move the typecasting burden out of the calling function and
 into the 

Re: [Haskell-cafe] Papers from the 2005 Haskell Workshop (Tallinn)?

2005-10-11 Thread Josef Svenningsson
On 10/12/05, John Meacham [EMAIL PROTECTED] wrote:
I certainly think we should somehow centralize an index to papers on
haskell. I have found it extremely difficult to track down papers for
authors that have since moved out of academia or have passed on and
don't have their personal homepages with their papers anymore.

The have been efforts to try to do this before. Here is an example:
http://haskell.readscheme.org/
It seems that this site hasn't been updated properly. But it might make the starting point of a new attempt.

Cheers,

/Josef 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A Tool To Show Functions Relationship?

2005-06-09 Thread Josef Svenningsson
On 6/6/05, Dimitry Golubovsky [EMAIL PROTECTED] wrote:
 Does there exist a tool which given a Haskell source, shows functions
 that are mutually recursive (i. e. call each other, even via calling
 third, etc. functions)? Knowledge of that would help to split the
 module into smaller modules without risk to create recursive modules.
 
When you sent this mail I seemed to recall a simple tool written by
Martin Nordbäck which could take a Haskell module and generate its call
graph. But when I searched the web for it I couldn't find it.
But to my surprise I found it today when wading through the heaps of
old Haskell code that I have around (looking for something completely
different.) I'm attaching it in the hope that you will find it useful.
It may have suffered from slight bit rot but it should be fairly easy
to get it up and running.

Cheers,

/Josef
-- Copyright (C) 2001 Safelogic AB
-- This program is free software; you can redistribute it and/or modify
-- it under the terms of the GNU General Public License as published by
-- the Free Software Foundation; either version 2 of the License, or
-- (at your option) any later version.
-- Last change: 2001-04-27

import HsParser
import HsParseMonad
import HsSyn
import List((\\), nub, partition)
import System
import Char(isAlphaNum)

parse_contents :: String -> ParseResult HsModule
parse_contents contents = parse contents (SrcLoc 0 0) 0 []

main :: IO ()
main = do
  args <- getArgs
  main' args

main' files = do
  contents <- mapM readFile files
  let allPairs = map get_names contents
  let allnames = map fst (concat allPairs)
  putStr "digraph call_graph {\n"
  let (subgraphs,arrows) = unzip $ map (subgraph allnames) (zip3 (map show [1..]) files allPairs)
  putStr $ unlines $ subgraphs
  putStr $ unlines $ arrows
  putStr "}"

subgraph allnames (num,name,pairs) =
  let shown = [ (name, filter (\x -> x `elem` allnames) vars) 
              | (name, vars) <- pairs]
  in ("subgraph cluster_" ++ num ++ " {\n" ++
      "label=\"" ++ name ++ "\";\n" ++
      unlines [ show_name name ++ ";" | (name,_) <- shown ] ++
      "}\n",
      unlines (map show_arrows shown))

get_names contents =
  let result = parse_contents contents
  in  case result of
        Failed string -> error "Parse failed: "
        Ok _ (HsModule mod exports imports decls) -> 
          let pairs = map (get_vars_decl []) decls
              -- The first in the pair is the function name, this function
              -- returns a list, but there will only be one element in it.
              pairs' = [(name,vars) | (name:[],vars) <- pairs]
              -- combine all names which are doubled
              pairs'' = combine_firsts pairs'
          in pairs''

combine_firsts pairs = case pairs of
  [] -> []
  (name, _):_ ->
    let (same_list, other_list) = partition (\(x,_) -> x==name) pairs
    in (name, nub (concatMap snd same_list)):combine_firsts other_list

show_arrows :: (HsName, [HsName]) -> String
show_arrows (name, calls) = case calls of
  --[] -> show_name name ++ ";\n"
  _  -> unlines [show_name name ++ " -> " ++ show_name call ++ ";" 
                | call <- calls ]

show_name :: HsName -> String
show_name name = case name of
  Qual (Module mod) string -> fix_name (mod ++ "_" ++ string)
  UnQual string -> fix_name string

fix_name :: String -> String
fix_name name = "\"" ++ name ++ "\""
-- fix_name name = map (\x -> if isAlphaNum x || x == '_' then x else '_') name

get_vars_decls :: [HsName] -> [HsDecl] -> ([HsName], [HsName])
get_vars_decls ignore decls = 
  let (names,vars) = unzip (map (get_vars_decl ignore) decls)
  in (concat names, concat vars)

get_vars_decl :: [HsName] -> HsDecl -> ([HsName], [HsName])
get_vars_decl ignore decl = case decl of
  HsFunBind _ [HsMatch _ name pats rhs decls] -> 
    let patvars = concatMap get_vars_pat pats
        vars = get_vars_rhs (ignore++patvars) rhs ++ 
               snd (get_vars_decls (ignore++patvars) decls)
    in ([name], nub vars) 
  HsPatBind _ pat rhs decls -> 
    let vars = get_vars_rhs ignore rhs ++
               snd (get_vars_decls (ignore++names) decls)
        names = get_vars_pat pat
    in (names, nub vars)
  _ -> ([],[])

get_vars_rhs :: [HsName] -> HsRhs -> [HsName]
get_vars_rhs ignore rhs = case rhs of
    HsUnGuardedRhs exp -> get_vars_exp ignore exp
    HsGuardedRhss guardedrhss -> 
      concatMap (get_vars_guardedrhs ignore) guardedrhss

get_vars_guardedrhs :: [HsName] -> HsGuardedRhs -> [HsName]
get_vars_guardedrhs ignore rhs = case rhs of
  HsGuardedRhs _ e1 e2 -> get_vars_exps ignore [e1,e2]

get_vars_exps :: [HsName] -> [HsExp] -> [HsName]
get_vars_exps ignore exps = concatMap (get_vars_exp ignore) exps

get_vars_exp :: [HsName] -> HsExp -> [HsName]
get_vars_exp ignore exp = case exp of
  HsVar name  -> if name `elem` ignore then [] else [name]
  HsInfixApp e1 e2 e3 -> get_vars_exps ignore [e1,e2,e3]
  HsApp e1 e2 -> get_vars_exps ignore [e1,e2]
  HsNegApp e  -> get_vars_exp ignore e
  HsLambda _ e -> get_vars_exp ignore e
  HsLet decls e   -> 
    let (ignores,vars) = get_vars_decls (ignores++ignore) decls
    in 

Re: Complexity bug in garbage collector?

2005-04-16 Thread Josef Svenningsson
On 4/14/05, Simon Marlow [EMAIL PROTECTED] wrote:
 On 14 April 2005 15:35, Josef Svenningsson wrote:
 
  I've had some fun chasing around a couple of space leaks lately. One
  of the graphs that I produced looked like this:
  www.cs.chalmers.se/~josefs/coresa.ps
 
  Notice the shape of the graph. It shows a perfect squareroot function.
  But my program should be allocating at a constant rate. From previous
  experience this suggests that there is a time complexity bug in the
  garbage collector. This makes it take time proportional to the square
  of the amount of allocated memory. Can someone confirm this?
 
 The X axis of the heap profile is mutator time: that is runtime
 excluding GC time, so you wouldn't see any non-linear GC effects in the
 shape of the heap profile anyway.  You'll be able to confirm this by
 comparing the time on the profile to the wall-clock time, and checking
 the output from +RTS -sstderr is useful too.
 
 It's possible you're seeing cache effects: as the working set grows
 larger, the program slows down.  The shape does look a bit too perfect
 to be cache effects, though.
 
I don't think the cache has much to do with what I'm seeing. I think
the program is mostly allocating and that is (as far as I remember)
much easier to handle efficiently with the cache than reading.

 I wouldn't rule out any bugs (of course :-), so please send us further
 evidence if you find it.
 
OK, I've cooked up this little program to study the behaviour a little closer:
\begin{code}
module Main where

main = print $ strictId [1..]

strictId list = let (c,c') = work list c'
in c
  where work [] y' = (y',[])
work (x:xs) y' = (v,x:v')
  where (v,v') = work xs y'
\end{code}

This program just allocates like crazy til it dies. The funny looking
strictId function is just the strict identity function on lists. (Yes,
there are simpler ways to achieve the same thing. I just think the
above function is particularly sweet :-)

I do the following:
$ ghc -prof -auto-all --make Main.hs
$ main.exe +RTS -hd -MVERY MUCH

The resulting graph is suspiciously similar in shape to the one of my
previous program. The garbage collector is still my primary suspect, I
simply don't know how to explain the graph otherwise.

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Complexity bug in garbage collector?

2005-04-14 Thread Josef Svenningsson
Hi all,

I've had some fun chasing around a couple of space leaks lately. One
of the graphs that I produced looked like this:
www.cs.chalmers.se/~josefs/coresa.ps

Notice the shape of the graph. It shows a perfect squareroot function.
But my program should be allocating at a constant rate. From previous
experience this suggests that there is a time complexity bug in the
garbage collector. This makes it take time proportional to the square
of the amount of allocated memory. Can someone confirm this?

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: loop performance bug

2005-03-15 Thread Josef Svenningsson
Hi again,

  A first hint is to never try to guess what kind of code ghc generates.
  If you're in need of performance you need to look at some lower level
  code. I recommend the -fext-core flag to produce external core, a
  sort-of-readable output of ghc's internal representation of the
  program. One can learn a lot from that.
 
 Although when I do I don't necessarily understand what I see! I see
 'let' (ie allocating forms) where I would have hoped there would not be
 but it doesn't help me to figure out why the code I've written ends up
 producing allocations.
 
Reading core programs can indeed be confusing and difficult. It's an
art that has to be mastered. All I can say is that in situations like
this it can be rewarding.

  I'm very much surprised that this program was slower than the one
  without arrays.
 
 You mean the one without lists.
 
Yes :-)

   Since all lists are produced and consumed with good
  producers and consumers (as in section 7.10.3 in the User's Guide)
  they should really go away. This seems to be a real bug.
 
 This is what Simon M thought when I brought this up previously. He
 suggested I use explicit recursion (which I thought I was doing with my
 second version though it doesn't do much better!)
 
And I suggest Simon M tries to fix ghc so that you don't have to write
the program using explicit recursion :-)

 I was hoping that if the array reading was inlined that the calculation
 of the index would also be inlined and so the Pos constructor need never
 be allocated.
 
I think this is a fair assumption. But apparently ghc doesn't remove
the overhead. Maybe ghc's optimizer needs to be beefed up.

 BTW have you thought about picking up hIDE development again? I might go
 back to it after Gtk2Hs hits version 1.0
 
You're confusing me with Jonas Svensson. I don't blame you, it's an
easy mistake to make if you're not a Swede. So the answer is no, I'm not
going to develop hIDE any further :-) I should mention though that
there is a group of students here at Chalmers working on a Haskell
IDE. I don't know the status of their project though.

/Josef
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: loop performance bug

2005-03-14 Thread Josef Svenningsson
Hi,

On Mon, 14 Mar 2005 13:41:06 +, Duncan Coutts
[EMAIL PROTECTED] wrote:
 This sort of code runs very slowly when compared to the equivalent in C:
 
I'm afraid the kind of array processing code you posted will always be
slower than the C equivalent. Haskell only has safe arrays, meaning
that every operation checks the array bounds and that's going to cost
you in situations like these. But I do agree that the timings you
showed are overly bad. Let's see what we can do about it.

A first hint is to never try to guess what kind of code ghc generates.
If you're in need of performance you need to look at some lower level
code. I recommend the -fext-core flag to produce external core, a
sort-of-readable output of ghc's internal representation of the
program. One can learn a lot from that.

  {-# OPTIONS -fglasgow-exts #-}
  module Main where
 
  import Data.Array.MArray
  import Data.Array.IO
 
  data Pos = Pos !Int !Int
deriving (Eq, Ord, Ix)
 
  main = test1
 
  test1 :: IO ()
  test1 = do
    (arr :: IOUArray Pos Bool) <- newArray_ (Pos 0 0, Pos 99 99)
    sequence_ [ sequence_ [ writeArray arr (Pos y x) False
                          | y <- [0..99]
                          , x <- [0..99] ]
              | _ <- [0..] ]
 
 
 Timing:
 
 $ ghc --make -O2 SpeedTest.hs -o speedtest-hs
 $ time ./speedtest-hs
 
 real0m10.968s
 user0m10.952s
 sys 0m0.005s
 
I'm very much surprised that this program was slower than the one
without arrays. Since all lists are produced and consumed with good
producers and consumers (as in section 7.10.3 in the User's Guide)
they should really go away. This seems to be a real bug.

[C program deleted]

  {-# INLINE doFromTo #-}
  -- do the action for [from..to] ,ie it's inclusive.
  doFromTo :: Int - Int - (Int - IO ()) - IO ()
  doFromTo from to action =
let loop n | n  to   = return ()
   | otherwise = do action n
loop (n+1)
 in loop from
 
  test2 :: IO ()
  test2 = do
    (arr :: IOUArray Pos Bool) <- newArray_ (Pos 0 0, Pos 99 99)
    doFromTo 0  $ \_ ->
      doFromTo 0 99 $ \y ->
        doFromTo 0 99 $ \x ->
          writeArray arr (Pos y x) False
 
 Timing:
 
 $ ghc --make -O2 SpeedTest.hs -o speedtest-hs
 $ time ./speedtest-hs
 
 real    0m7.942s
 user    0m7.936s
 sys     0m0.001s
 
 So not much better really. :-(
 
 If there's a way to write loops that is faster, I'd dearly like to know.
 
 I initially assumed that the second version would run fast since there
 is no obvious need for any memory allocations (which is the usual
 culprit).
 
Oh, there are still reasons for memory allocations. You use an
algebraic data type to index the arrays. That is also going to kill
performance. If we look at the external core we can clearly see that
the indexing with the Pos data type is adding overhead. So to improve
we can write the program in a lower level style like this:

 test3 :: IO ()
 test3 = do
   (arr :: IOUArray Int Bool) <- newArray_ (0, 100*100-1)
   doFromTo 0  $ \_ ->
     doFromTo 0 99 $ \y ->
       doFromTo 0 99 $ \x ->
         writeArray arr (y*100 + x) False   -- flat row-major index instead of Pos

This gave me a factor two speedup compared to test2.

Well, if we stare some more at the external core generated by ghc we
will see that there is still a lot of junk in the inner loop. I don't
know what much of it is doing there; I'm sure the implementors can
explain what's going on.

HTH

/Josef
___
Glasgow-haskell-bugs mailing list
Glasgow-haskell-bugs@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: GHC 6.4 release candidates available

2005-02-18 Thread Josef Svenningsson
Hi,

Compiling 6.4.20050217 on Windows according to the book fails pretty early:
<snippet>
/cygdrive/c/ghc/ghc-6.2.2/bin//ghc -H16m -O -I. -Rghc-timing -I../../../libraries -fglasgow-exts -no-recomp -c Compat/RawSystem.hs -o Compat/RawSystem.o -ohi Compat/RawSystem.hi
c:/DOCUME~1/JOSEFS~1/LOKALA~1/Temp/ghc3868.hc: In function `s3SQ_entry':
c:/DOCUME~1/JOSEFS~1/LOKALA~1/Temp/ghc3868.hc:109: too many arguments to function `rawSystem'
</snippet>

Cheers,

/Josef
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell] [Newbie] Data structure for Dijkstra's algorithm

2005-02-15 Thread Josef Svenningsson
On Mon, 14 Feb 2005 12:27:51 -0500, robert dockins
[EMAIL PROTECTED] wrote:
 [Dijkstra's] algorithm relies pretty fundamentally on mutability, which
 makes it a less than wonderful fit for a functional language.  If you want
 to use this algorithm in particular, I would recommend a mutable array
 indexed on the vertex pair (u,v).

It is quite possible to implement the shortest path algorithm
functionally, even though it is not straightforward. See the following
paper by Ralf Hinze:
http://www.informatik.uni-bonn.de/~ralf/publications/ICFP01.pdf
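
To back up that claim, here is a deliberately naive sketch of my own (it
is not the priority-search-queue technique from Hinze's paper, it makes
no attempt at efficiency, and the names are made up) showing that plain
Data.Map is enough to run Dijkstra without any mutation:

 import qualified Data.Map as M
 import Data.List (minimumBy)

 type Vertex = Int
 type Weight = Int
 type Graph  = M.Map Vertex [(Vertex, Weight)]

 -- Shortest distances from the source to every reachable vertex,
 -- assuming non-negative edge weights.
 dijkstra :: Graph -> Vertex -> M.Map Vertex Weight
 dijkstra g source = go (M.singleton source 0) M.empty
   where
     go frontier done
       | M.null frontier = done
       | otherwise       = go frontier' done'
       where
         -- pick the unfinished vertex with the smallest tentative distance
         (v, d)     = minimumBy (\a b -> compare (snd a) (snd b))
                                (M.toList frontier)
         done'      = M.insert v d done
         neighbours = M.findWithDefault [] v g
         -- relax the edges out of v, keeping the smaller distance
         relax fr (u, w)
           | u `M.member` done' = fr
           | otherwise          = M.insertWith min u (d + w) fr
         frontier'  = foldl relax (M.delete v frontier) neighbours

For example, with g = M.fromList [(1,[(2,7),(3,9)]), (2,[(3,1)]), (3,[])]
the call dijkstra g 1 should give M.fromList [(1,0),(2,7),(3,8)]. Hinze's
priority search queues are what make the frontier operations efficient.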

/Josef
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-15 Thread Josef Svenningsson
On Mon, 14 Feb 2005 19:01:53 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 I was thinking more along the lines of Ralf Hinze's nondeterminism
 transformer monad:
 
 http://haskell.org/hawiki/NonDeterminism
 
 The relevant instance is this:
 
 instance (Monad m) => MonadPlus (NondetT m)
 
 That is, if m is a Monad, then NondetT m is a MonadPlus.  This is not
 true if a requirement for MonadPlus is that it include the mzero is a
 right zero for bind law.  Indeed, such a transformer is impossible to
 write if that law is a requirement.
 
Ah, I see. You are quite right.

  You claimed that monad transformers break the
  mzero-is-right-identity-for-bind law because they can be applied to
  IO. I say, it's not the monad transformers' fault. They cannot possibly
  be expected to repair the law if they are given a faulty monad.
 
 IO is not a faulty monad.  It satisfies all of the laws that a monad is
 supposed to satisfy.
 
Sloppy terminology on my part again. What I meant to say is that any
MonadPlus instance for IO is faulty if we insist on the
mzero-is-right-identity-for-bind law. I agree with you that the law
should be dropped.
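
Just to make the point concrete, this is the kind of counterexample I
have in mind. It assumes the MonadPlus instance for IO that
Control.Monad.Error provides (where, if I remember correctly, mzero
simply throws an IO error):

 import Control.Monad (MonadPlus(..))
 import Control.Monad.Error ()   -- only for its MonadPlus IO instance

 -- If the mzero-is-right-identity-for-bind law held, these two would
 -- be indistinguishable:
 left, right :: IO ()
 left  = putStrLn "effect!" >> mzero   -- prints "effect!", then fails
 right = mzero                         -- fails without printing anything

Since the print cannot be undone, no MonadPlus instance for IO can
satisfy the law.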

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-14 Thread Josef Svenningsson
On Sun, 13 Feb 2005 19:08:26 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 Quoting Josef Svenningsson [EMAIL PROTECTED]:
 
  I think it's unfair to the monad transformers to simply say that they
  don't obey the law. The interesting thing is whether they *preserve*
  the law. A monad transformer T preserves a law if given a monad M
  which obeys the law holds then the monad T M obeys the law.
 
 The law in question is that mzero is a right-zero for bind.  How can an
 underlying monad be said to obey this law if it doesn't support mzero?
 
You're of course absolutely right that it doesn't make sense to talk
about mzero being a right-identity for bind if the monad doesn't
support mzero. I should have been more clear. Let me have another try
at explaining myself.

Let's consider a specific monad transformer, say (ReaderT r). What I
hope to convince you of is that (ReaderT r) cannot be said to break the
mzero-is-right-identity-for-bind law. Now, if we look at the MonadPlus
instance for (ReaderT r) it looks like this:
\begin{code}
instance (MonadPlus m) => MonadPlus (ReaderT r m) where
    mzero       = ReaderT $ \_ -> mzero
    m `mplus` n = ReaderT $ \r -> runReaderT m r `mplus` runReaderT n r
\end{code}
The important thing to note here is that the above instance
declaration relies on an underlying monad m with mzero and mplus. If
we try (and indeed succeed) to prove that (ReaderT r m) satisfies the
mzero-is-right-identity-for-bind law we will see that the proof depends
crucially on the fact that m also obeys the law. This is the best that
the monad transformer can do, namely to preserve the law.
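
To spell the proof sketch out (using the definitions from the monad
transformer library, where m >>= k = ReaderT $ \r -> runReaderT m r >>=
\a -> runReaderT (k a) r):

    m >> mzero
  = ReaderT $ \r -> runReaderT m r >>= \_ -> runReaderT mzero r  -- unfold (>>)
  = ReaderT $ \r -> runReaderT m r >>= \_ -> mzero               -- unfold mzero
  = ReaderT $ \r -> runReaderT m r >> mzero
  = ReaderT $ \r -> mzero          -- needs the law for the underlying monad m!
  = mzero                          -- fold the definition of mzero

The step marked "needs the law" is the only one that is not just
unfolding definitions, and it is exactly where the law for m comes in.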

You claimed that monad transformers break the
mzero-is-right-identity-for-bind law because they can be applied to
IO. I say, it's not the monad transformers' fault. They cannot possibly
be expected to repair the law if they are given a faulty monad.

I hope this makes things a little clearer.

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-14 Thread Josef Svenningsson
On Mon, 14 Feb 2005 10:07:41 -0500, Jacques Carette [EMAIL PROTECTED] wrote:
 Josef Svenningsson [EMAIL PROTECTED] wrote:
  You claimed that monad transformers break the
  mzero-is-right-identity-for-bind law because they can be applied to
  IO. I say, it's not the monad transformers fault. They cannot possibly
  be expected to repair the law if they are given a faulty monad.
 
 Doesn't that argue for allowing proven and unproven Monads in Haskell?
 
I believe you misunderstand me. The point that I was trying to make
was about monad transformers. I was saying that the best they can do
with respect to a law is to preserve it, which I believe all monad
transformers should do. Otherwise they don't deserve the name.

Turning to monads, they should certainly fulfil the laws we associate
with a monad. Otherwise they wouldn't be monads! I am not proposing or
encouraging that one should declare a data type an instance of Monad
if it does not obey the monad laws.

Cheers,

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is MonadPlus good for?

2005-02-13 Thread Josef Svenningsson
On Sun, 13 Feb 2005 17:59:57 -0500, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 G'day all.
 
 Quoting Remi Turk [EMAIL PROTECTED]:
 
  According to http://www.haskell.org/hawiki/MonadPlus (see also
  the recent thread about MonadPlus) a MonadPlus instance
  should obey m >> mzero === mzero, which IO doesn't. IOW, the
  MonadPlus instance for IO (defined in Control.Monad.Error)
  probably shouldn't be there.
 
 Clearly the wiki page has not been updated to reflect the current
 debate. :-)
 
 I've changed the wording to this.  Anyone disagree?
 
 Note: There are theoretical reasons why ''mzero'' should be a
 right-zero for (>>=), but surprisingly few of the existing MonadPlus
 instances actually obey this law.  {{{IO}}} does not, and neither do
 any [MonadTransformer]s, since they may be stacked on top of {{{IO}}}.
 This suggests that either some of the extant MonadPlus instances are
 inappropriate, or that the law itself might be incorrect.  There is
 continuing debate over this, and the dust has not yet settled.
 
I think it's unfair to the monad transformers to simply say that they
don't obey the law. The interesting thing is whether they *preserve*
the law. A monad transformer T preserves a law if, given a monad M
which obeys the law, the monad T M also obeys the law. I haven't
checked if this is the case for any of the monad transformers in the
hierarchical libraries though. But I think that the wording should be
changed so that they aren't blamed for breaking the law.

(I can't believe I'm taking sides with monad transformers as if they
were human. I spend too much time hacking Haskell, I guess...)

/Josef
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parse text difficulty

2004-12-10 Thread Josef Svenningsson
On Thu, 09 Dec 2004 10:18:12 -0500, Robert Dockins
[EMAIL PROTECTED] wrote:
   And I thought that most programmers used zipWith, which has to be
   prefix.
 
 Is this true?  Can you not use backticks on a partially applied
 function?  If so, it seems like such a thing would be pretty useful
 (although I've never actually had occasion to need it, so)  I'll dig
 out the report and check sometime, but does anyone know for sure that
 the following wouldn't work?
 
 [1..5] `zipWith (+)` [7..]
 
It is possible to emulate this behaviour with some operator trickery. See:
http://www.haskell.org/pipermail/haskell-cafe/2002-July/003215.html
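
Roughly, the trick is to replace the backticks by a pair of operators.
A sketch (the operator names here are made up; the post above spells
out the original version):

 infixl 5 <|, |>

 (<|) :: a -> (a -> b) -> b
 x <| f = f x

 (|>) :: (a -> b) -> a -> b
 f |> y = f y

 -- parses as (zipWith (+) [1..5]) [7..], i.e. [8,10,12,14,16]
 example :: [Int]
 example = [1..5] <| zipWith (+) |> [7..]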

/Josef
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe

