RE: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Simon Peyton-Jones

| First bad thing:
| Stack size (memory consumed) doubles each time it overflows.
|
| Second bad thing:
| Arbitrary limit on stack size unrelated to overall (heap) memory
| available.
|
| Third bad thing (the really bad thing):
| If a stack has temporarily grown (to 64M say), it will never shrink
| back down again to something more typical ( 4K say). If I understand
| correctly, it will continue to take 64M from the heap regardless.
|
| What I would like is to be able to set an upper limit on total memory
| usage and allow the program to freely use this memory as either stack
| or heap. At least that should be the default behaviour, but maybe
| also allow +RTS restrictions for debugging (though I don't think this
| is a very good way of investigating a program's stack use).
|
| I would also like stack memory allocation to increase (and decrease :-)
| in some sane sized linear increment, not double each time. With the
| current scheme, as I understand it, if 65M is needed then 128M will be
| allocated.

Would you like to create a ticket for this?  I don't know how many people it 
bites, and how often, but it'd be good to have it recorded.  Changing to a 
linear increment would be relatively straightforward, but would have quadratic 
copying cost as the stack grew big.  Shrinking stacks would not be hard, I 
think, at the cost of perhaps copying them again if they grew big.
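(A rough back-of-the-envelope sketch of that quadratic cost, not GHC's actual RTS accounting, with a 4K starting stack assumed purely for illustration: if every grow step copies the live stack into a fresh, larger block, the totals compare roughly like this.)

    copiedDoubling, copiedLinear :: Integer -> Integer -> Integer
    -- total bytes copied while growing from 'start' up to 'target', doubling each time
    copiedDoubling start target = sum (takeWhile (< target) (iterate (* 2) start))
    -- total bytes copied while growing in fixed increments of 'step'
    copiedLinear step target = sum (takeWhile (< target) (iterate (+ step) step))

    -- e.g. copiedDoubling 4096 (64*2^20) is roughly 67MB copied in total, while
    --      copiedLinear   4096 (64*2^20) is roughly 0.5TB copied in total.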

| Stefan O'Rear suggested an alternative. I don't know how hard it would
| be to implement though (don't really know anything about ghc rts).
|
|   http://haskell.org/pipermail/glasgow-haskell-users/2007-May/012472.html

Yes, this is the standard solution, and it's a good one because it has a robust 
cost model (no quadratic costs).  However, it's tricky to get right; copying is 
simpler.  If a significant fraction of runtime (for some interesting 
program(s)) turned out to be consumed by copying stacks then we could consider 
this.

Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Adrian Hey

Stefan O'Rear wrote:

On Mon, Feb 04, 2008 at 10:13:12PM +, Adrian Hey wrote:

Also
remember that this behaviour never wastes more than 50% of the stack,
which is a relatively small amount.

Only if the stack is relatively small. Would you say the same about
heap, or about a stack that only needed 50% of heap space but ended
up using all of it? Or money? Using twice as much as you need of
anything is bad IMO.


Apparently you don't realize that GHC normally uses twice as much heap
as is needed, due to the decision to use a two-space copying collector
by default for the oldest generation. :)


Yikes again! It gets worse :-)

Perhaps I should have said *live* heap.

Regards
--
Adrian Hey








___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Adrian Hey

Simon Peyton-Jones wrote:

| First bad thing:
| Stack size (memory consumed) doubles each time it overflows.
|
| Second bad thing:
| Arbitrary limit on stack size unrelated to overall (heap) memory
| available.
|
| Third bad thing (the really bad thing):
| If a stack has temporarily grown (to 64M say), it will never shrink
| back down again to something more typical ( 4K say). If I understand
| correctly, it will continue to take 64M from the heap regardless.
|
| What I would like is to be able to set an upper limit on total memory
| usage and allow the program to freely use this memory as either stack
| or heap. At least that should be the default behaviour, but maybe
| also allow +RTS restrictions for debugging (though I don't think this
| is a very good way of investigating a program's stack use).
|
| I would also like stack memory allocation to increase (and decrease :-)
| in some sane sized linear increment, not double each time. With the
| current scheme, as I understand it, if 65M is needed then 128M will be
| allocated.

Would you like to create a ticket for this?


OK


I don't know how many people it bites, and how often,


The problem is that the fact that it *might* bite often affects your
whole coding style (well mine actually :-) for some problems. It also
seems to have given rise to the POV that ghc's current behaviour is good
because stack use is bad. MHO is that it's only ghc's current behaviour
that *makes* stack use bad.

I think it bites a lot less often than it otherwise would because most
people will deliberately choose to use heap in preference to stack (at
least when writing eager code) just to avoid the problem. But it still
bites pretty often anyway with lazy code for unexpected reasons.
Arguably such situations are indeed a bug more often than not, but
I still think terminating the program unnecessarily (at 8M stack) is
bad default policy.


Yes, this is the standard solution, and it's a good one because it has a robust 
cost model (no quadratic costs).  However, it's tricky to get right; copying is 
simpler.  If a significant fraction of runtime (for some interesting 
program(s)) turned out to be consumed by copying stacks then we could consider 
this.


Do you really need such evidence? If we agree that allowing stack to
grow to arbitrary (limited only by memory availability) size is
reasonable then surely we already know that there will be some stack
size for which quadratic copying cost is going to get stupid :-)

Of course there are other, more radical solutions that come to
mind, like not using a (C style) stack at all. But I guess we'd
be talking about a complete re-write of pretty much all the
RTS and much of the compiler to do this :-(

Regards
--
Adrian Hey

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Simon Peyton-Jones

| Yes, this is the standard solution, and it's a good one because it has a
| robust cost model (no quadratic costs).  However, it's tricky to get right;
| copying is simpler.  If a significant fraction of runtime (for some
| interesting program(s)) turned out to be consumed by copying stacks then we
| could consider this.
|
| Do you really need such evidence? If we agree that allowing stack to
| grow to arbitrary (limited only by memory availability) size is
| reasonable then surely we already know that there will be some stack
| size for which quadratic copying cost is going to get stupid :-)

Indeed, in principle.  But there are only so many GHC-HQ cycles.  Fixing stacks 
means not fixing something else, so it matters which issues bite most users.

This isn't a fixed-sum game.  The more people help fix and improve GHC, the 
more we can focus on the tricky bits that only we can do.

Simon
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bimap 0.2

2008-02-05 Thread Neil Mitchell
Hi

 The main difference is a pretty comprehensive interface shakeup: the
 Either variants have been dropped, all L variants have had the L
 removed from their name, and a few functions have been curried. The
 end result is an interface much closer to that of Data.Map. (This also
 means that 0.2 isn't backwards-compatible.)

Good, this seems much nicer. A few comments:

1) You depend on MTL, but I can't obviously see why. Perhaps the test suite?

2) The code:

{-| Insert a pair of values into the bimap, without checking for
overlapping bindings. If either value is already in the bimap, and
is not bound to the other value, the bimap will become inconsistent.
-}
unsafeInsert :: (Ord a, Ord b)
    => a -> b -> Bimap a b -> Bimap a b
unsafeInsert x y (MkBimap left right) =
    MkBimap (M.insert x y left) (M.insert y x right)

To my understanding, this is always safe, since insert will overwrite
any existing element in a map. Hence you don't need to do the deletes
first. In addition, since Bimap is not exported internally, the
invariant will never be violated.

3)
insert x y = delete x
         >>> deleteR y
         >>> unsafeInsert x y

Why not:

insert x y = unsafeInsert x y . delete x . delete y

Now you don't end up using the arrow combinators, and it becomes more
readable (at least to me). Of course, this function may disappear
entirely if what I wrote in (2) is correct.

I will probably start using the bimap package in my current project,
so you already have at least one user :-)

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bimap 0.2

2008-02-05 Thread Loup Vaillant
2008/2/5, Neil Mitchell [EMAIL PROTECTED]:
 3)
 insert x y = delete x
          >>> deleteR y
          >>> unsafeInsert x y

 Why not:

 insert x y = unsafeInsert x y . delete x . delete y

 Now you don't end up using the arrow combinators, and it becomes more
 readable (at least to me). Of course, this function may disappear
 entirely if what I wrote in (2) is correct.

I'd rather prefer the arrow combinator: it lets me read the composition
in natural order. Sure, (>>>) is more verbose than (.), and less
idiomatic, but I tend to think it scales a bit more (in the case of
bigger compositions). Well, maybe just a matter of taste...

Loup
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Bulat Ziganshin
Hello Matthew,

Monday, February 4, 2008, 11:45:51 PM, you wrote:

 That would be nice. But it's only beneficial if there are programs
 which take large amounts of stack at some point, but then shrink down
 to very little stack and continue for a reasonable amount of time.

 From the 'when I was a lad' department...

 Thinking back to when Java transitioned to a garbage collector that could give
 memory back to the OS, we got some unexpected benefits. Consider a machine

i would also be happy if ghc would return unused *heap* memory back to
the OS - it's immediately needed for my GUI program where users may open
huge files and then close them. but i personally don't have the same
need for *stack*

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bimap 0.2

2008-02-05 Thread Neil Mitchell
Hi

   1) You depend on MTL, but I can't obviously see why. Perhaps the test 
  suite?

 The current implementation of (!) relies on the Monad instance for
 Either exported by Control.Monad.Error. There's no fundamental reason
 for this; it was just easier to code. Perhaps I'll get rid of it in a
 later version, but I haven't bothered yet because I don't think an MTL
 dependency is a big deal.

Yes, an MTL dependency is nothing to worry about at all, and isn't
worth even thinking about removing given it's actually used.

 Heck, let me prove it to you -- here's what happens if I define
 (insert = unsafeInsert):

Ah, I see now. You are of course correct.

   3)
   insert x y = delete x
            >>> deleteR y
            >>> unsafeInsert x y
 
   Why not:
 
   insert x y = unsafeInsert x y . delete x . delete y
 
   Now you don't end up using the arrow combinators, and it becomes more
   readable (at least to me). Of course, this function may disappear
   entirely if what I wrote in (2) is correct.

 This is a matter of taste, I guess. In this case I felt that the
 backwards order of (.) was a little misleading, and I'm personally
 quite comfortable with using (>>>) for forward composition.

Fair enough.

You've obviously thought about the issues in this package a great
deal, which gives me a nice happy feeling when using the package!

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Adding gcc type options with cabal (e.g. -mno-cygwin)

2008-02-05 Thread Duncan Coutts

On Tue, 2008-02-05 at 00:10 -0500, Berlin Brown wrote:

 It looked like it passed the option, but didn't resolve the issue.
 Anyone seen that before?  See error in previous post.

GHC is not a cygwin program. It does not use the cygwin gcc; it always
uses its own gcc anyway (which happens to be a MinGW binary), so
-mno-cygwin should not make any difference to that gcc.

I'd advise using MinGW + MSYS with GHC under Windows rather than cygwin.
Or use neither and just use a console.

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] bimap 0.2

2008-02-05 Thread Stuart Cook
On Tue, Feb 5, 2008 at 9:11 PM, Neil Mitchell [EMAIL PROTECTED] wrote:
 Hi


   The main difference is a pretty comprehensive interface shakeup: the
   Either variants have been dropped, all L variants have had the L
   removed from their name, and a few functions have been curried. The
   end result is an interface much closer to that of Data.Map. (This also
   means that 0.2 isn't backwards-compatible.)

  Good, this seems much nicer. A few comments:

  1) You depend on MTL, but I can't obviously see why. Perhaps the test suite?

The current implementation of (!) relies on the Monad instance for
Either exported by Control.Monad.Error. There's no fundamental reason
for this; it was just easier to code. Perhaps I'll get rid of it in a
later version, but I haven't bothered yet because I don't think an MTL
dependency is a big deal.


  2) The code:

  {-| Insert a pair of values into the bimap, without checking for
  overlapping bindings. If either value is already in the bimap, and
  is not bound to the other value, the bimap will become inconsistent.
  -}
  unsafeInsert :: (Ord a, Ord b)
      => a -> b -> Bimap a b -> Bimap a b
  unsafeInsert x y (MkBimap left right) =
      MkBimap (M.insert x y left) (M.insert y x right)

  To my understanding, this is always safe, since insert will overwrite
  any existing element in a map. Hence you don't need to do the deletes
  first. In addition, since Bimap is not exported internally, the
  invariant will never be violated.

Consider the Bimap { (1, alfa) }. If I do (unsafeInsert 1 bravo),
the left-hand map will contain:

  { (1 => bravo) }

and the right-hand map will contain:

 { (alfa => 1), (bravo => 1) }

If I then do (lookupR alfa), I'll incorrectly get (Just 1) instead
of (Nothing).

Heck, let me prove it to you -- here's what happens if I define
(insert = unsafeInsert):

  prop_clobberR : Falsifiable, after 0 tests:
  fromList [(-1,1)]
  0
  prop_valid: Falsifiable, after 12 tests:
  fromList [(1,-5),(5,-5)]

It is true that at least one of the deletes will always be
unnecessary, but since a failed delete does nothing we might as well
leave both of them in.
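(A standalone illustration of the inconsistency above, using two plain Data.Maps
in place of the bimap package's internal representation; a sketch, not code from
the package itself:)

  import qualified Data.Map as M

  main :: IO ()
  main = do
    let left0  = M.fromList [(1 :: Int, "alfa")]    -- left-hand map
        right0 = M.fromList [("alfa", 1 :: Int)]    -- right-hand map
        -- unsafeInsert 1 "bravo": overwrite both directions, no deletes first
        left1  = M.insert 1 "bravo" left0
        right1 = M.insert "bravo" 1 right0
    print (M.lookup 1 left1)        -- Just "bravo"
    print (M.lookup "alfa" right1)  -- Just 1, although 1 is now paired with "bravo"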

The export of (valid) is mostly a debugging aid, so that users can
check whether their problems are caused by my code.


  3)
  insert x y = delete x
           >>> deleteR y
           >>> unsafeInsert x y

  Why not:

  insert x y = unsafeInsert x y . delete x . delete y

  Now you don't end up using the arrow combinators, and it becomes more
  readable (at least to me). Of course, this function may disappear
  entirely if what I wrote in (2) is correct.

This is a matter of taste, I guess. In this case I felt that the
backwards order of (.) was a little misleading, and I'm personally
  quite comfortable with using (>>>) for forward composition.


  I will probably start using the bimap package in my current project,
  so you already have at least one user :-)

Glad to hear it, and thanks for your feedback.


Stuart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Haskore dead? - By no means!

2008-02-05 Thread Henning Thielemann

On Mon, 4 Feb 2008 [EMAIL PROTECTED] wrote:

 On 2008.02.04 16:11:55 -0200, Maurício [EMAIL PROTECTED] scribbled 0.3K 
 characters:
  Hi,
 
  I've just tried using Haskore (I use Ubuntu
  and GHC), with no success. Since Haskore was
  started a long time ago, but it's not yet
  cabalized, and the author's page can not be
  reached, I can't say for sure if it's still
  maintained. Does anybody know?
 
  Thanks,
  Maurício

 I think the homepage http://www.haskell.org/haskore/ is more than a bit 
 old. I googled a bit more, and found 
 http://www.haskell.org/haskellwiki/Haskore which links to a Darcs repo at 
 http://darcs.haskell.org/haskore/. The most recent modification date is for 
 src/, 04-Dec-2007. I'm trying it out right now, but the darcs get is taking a 
 while.

I've set up and maintain the Darcs repository. I've extended and corrected
the original package by Paul Hudak a lot. It's also cabalized, but still very
much a building site, with (too) many package dependencies.

Discussion has shifted to:
  http://lists.lurk.org/mailman/listinfo/haskell-art
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: bimap 0.2

2008-02-05 Thread Christian Maeder
Neil Mitchell wrote:
 Yes, an MTL dependency is nothing to worry about at all, and isn't
 worth even thinking about removing given it's actually used.

I would appreciate haskell98 portability!

We have a similar module, too, and would switch to a standard (if bimap
becomes it).

We've called it injective maps. Does surjectivity make sense at all?
Our other names are bad, but maybe transpose or inverse is better
than twist (viewing maps as finite functions).

Our delete function takes both values as input, possibly deleting two
entries, but your two delete functions make more sense.

http://www.dfki.de/sks/hets/src-distribution/daily/Hets/docs/Common-InjMap.html
or
http://www.dfki.de/sks/hets/src-distribution/versions/Hets/docs/Common-InjMap.html

Cheers Christian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fmap vs. liftM

2008-02-05 Thread Henning Thielemann

On Mon, 4 Feb 2008, Miguel Mitrofanov wrote:

  Problem is that from the idea Functor is a superclass of Monad, with
  the
  property that fmap == liftM.

 [cut]

  The second relation can even not be expressed in Haskell 98.

 Erm...

 class Functor f where
  fmap :: (a -> b) -> f a -> f b
 class Functor m => Monad m where
  return :: a -> m a
  join :: m (m a) -> m a

 bind :: Monad m => m a -> (a -> m b) -> m b
 bind mx f = join $ fmap f mx

nice

 Now liftM must be exactly equal to fmap.

How do you convince the compiler that
  'join (fmap return x) == x' ?
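(The law cannot be stated as a class constraint, but for any concrete monad it
can at least be tested; a minimal QuickCheck sketch for Maybe, purely
illustrative:)

  import Control.Monad (join)
  import Test.QuickCheck (quickCheck)

  prop_joinFmapReturn :: Maybe Int -> Bool
  prop_joinFmapReturn x = join (fmap return x) == x

  main :: IO ()
  main = quickCheck prop_joinFmapReturn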
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] bimap 0.2

2008-02-05 Thread Bulat Ziganshin
Hello Neil,

Tuesday, February 5, 2008, 1:11:47 PM, you wrote:

 insert x y = delete x
          >>> deleteR y
          >>> unsafeInsert x y

i use the following trick:

(.$) = flip ($)

insert x y it = it .$ delete x
                   .$ deleteR y
                   .$ unsafeInsert x y


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fmap vs. liftM

2008-02-05 Thread Brandon S. Allbery KF8NH


On Feb 5, 2008, at 8:31 , Henning Thielemann wrote:


How do you convince the compiler that
  'join (fmap return x) == x' ?


How do you convince it that the current formulation of Monad obeys  
the monad laws?  (rhetorical)


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Minor issue with mapAccumR

2008-02-05 Thread Henning Thielemann

On Tue, 5 Feb 2008, Cale Gibbard wrote:

 Are many people using mapAccumR? How much would it hurt to change it?

I cannot remember having ever used mapAccumR, but I have used mapAccumL where I
had previously used mapM on the State monad, in order to avoid a dependency on
MTL (and thus non-Haskell-98 features).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 2:50 AM, Adrian Hey [EMAIL PROTECTED] wrote:
 I think it bites a lot less often than it otherwise would because most
 people will deliberately chose to use heap in preference to stack (at
 least when writing eager code) just to avoid the problem. But it still
 bites pretty often anyway with lazy code for unexpected reasons.
 Arguably such situations are indeed a bug more often than not, but
 I still think terminating the program unnecessarily (at 8M stack) is
 bad default policy.

Please, after all this, do give an example.

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fmap vs. liftM

2008-02-05 Thread Henning Thielemann

On Tue, 5 Feb 2008, Brandon S. Allbery KF8NH wrote:

 On Feb 5, 2008, at 8:31 , Henning Thielemann wrote:

  How do you convince the compiler that
'join (fmap return x) == x' ?

 How do you convince it that the current formulation of Monad obeys
 the monad laws?  (rhetorical)

My point was that the constraint 'liftM == fmap' cannot be expressed in
Haskell 98, and the answer by Miguel suggested that it would be possible
by a different design of class Monad. The above was my objection to this
suggestion.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parameterisations of Monads

2008-02-05 Thread Matthew Sackman
So I was thinking how dull and uninspiring the current definiton of
Monad really is and came up with some more interesting
parameterisations. The only problem with this one is I'm a) not sure if
it still is a Monad and b) very unsure if it's of any use. There's the
possibility that chucking Cont in there or using newtype to simulate
multiple arrows / type lambdas may lead to more interesting instances,
but can anyone think of exciting use cases for this stuff?

Feel free to fill in the instances! It's also not a parameterisation
I've seen before.

Matthew

 class SuperMonad (m1 :: * -> * -> *) (m2 :: * -> *) where
    (>>~)   :: m1 (m2 a) (m1 (m2 b) (m2 b))
    (>>=~)  :: m1 (m2 a) (m1 (m1 a (m2 b)) (m2 b))
    returns :: m1 a (m2 a)

 instance (Monad m) => SuperMonad ((->)) m where
    (>>~)   :: m a -> m b -> m b
    (>>~)   = (>>)
    (>>=~)  :: m a -> (a -> m b) -> m b
    (>>=~)  = (>>=)
    returns :: a -> m a
    returns = return

 instance (Monad m) => SuperMonad ((,)) m where
    (>>~)   :: (m a, (m b, m b))
    (>>=~)  :: (m a, ((a, m b), m b))
    returns :: (a, m a)

 instance (Monad m) => SuperMonad Either m where
    (>>~)   :: Either (m a) (Either (m a) (m b))
    (>>=~)  :: Either (m a) (Either (Either a (m b)) (m b))
    returns :: Either a (m a)

 instance (Monad m) => SuperMonad State m where
    (>>~)   :: State (m a) (State (m a) (m b))
    (>>=~)  :: State (m a) (State (State a (m b)) (m b))
    returns :: State a (m a)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)

2008-02-05 Thread Henning Thielemann

On Fri, 1 Feb 2008, Aaron Denney wrote:

 On 2008-02-01, Bjorn Buckwalter [EMAIL PROTECTED] wrote:
  If Naturals had been sufficient for me I wouldn't have done my own
  implementation (I'm unaware of any other implementation of Integers).
  And there is certainly a lot of value to the clearer error messages
  from a decimal representation.

 I did a balanced-base-three (digits are 0, and +- 1) representation to
 get negative decimals.

Nice. In German the digit values are sometimes called eins, keins, meins. :-)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Signature for non-empty filter

2008-02-05 Thread Dan Licata
[CC'ed to the agda mailing list as well]

On Feb05, Henning Thielemann wrote:
 
 Is Haskell's type system including extensions strong enough for describing
 a function, that does not always return a trivial value? E.g.
    (filter (\x -> x==1 && x==2))
   will always compute an empty list. Using an appropriate signature for
 the function it shall be rejected at compile time, because it is probably
 a mistake - probably (filter (\x -> x==1 || x==2) xs) was meant. I assume
 that the type must contain somehow an example input for which the function
 value is non-trivial. If Haskell's type system is not strong enough, what
 about dependently typed languages like Cayenne or Epigram? (I know,
 theorem provers have no problem with such types.)
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 

You can definitely do this with dependent types.  Here's a way to do it
in Agda 2:

  -- booleans, just like in Haskell:
  data Bool : Set where
True : Bool
False : Bool
  
  _||_ : Bool -> Bool -> Bool
  False || False = False
  _ || _ = True
  
  _&&_ : Bool -> Bool -> Bool
  True && True = True
  _ && _ = False
  
  not : Bool -> Bool
  not True = False
  not False = True

  -- Now our first use of dependency: we need to turn a boolean value
  -- into the proposition that it's true.  We do this with a type 
  -- Check b where only Check True is inhabited; Check False is not
  data Check : Bool -> Set where
OK : Check True
  
  -- a function f is satisfiable if there is some input on which it returns true
  data Sat {A : Set} : (A -> Bool) -> Set where
    Witness : {f : A -> Bool} -> (a : A) -> Check (f a) -> Sat f
  
  -- here's an easy one:
  example : Sat (\x -> x || not x)
  example = Witness True OK
  
  -- there's no way to prove this one, as each case you'd need
  -- a term of type Check False in the empty context
  example2 : Sat (\x -> x && not x)
  example2 = Witness True {! need a term of type Check False !} 
  example2 = Witness False {! need a term of type Check False !} 
  
  -- Now you can use Sat f as a precondition to filter:

  data List (A : Set) : Set where
[]   : List A
    _::_ : A -> List A -> List A
  
  filter : {A : Set} (f : A -> Bool) -> Sat f -> List A -> List A
  filter f sat [] = []
  filter f sat (x :: xs) with f x 
  ...   | True  = x :: (filter f sat xs)
  ...   | False = filter f sat xs

  -- Note that the code doesn't use sat at all, so we might as well have
  -- written:
  stdfilter : {A : Set} -> (A -> Bool) -> List A -> List A
  stdfilter f [] = []
  stdfilter f (x :: xs) with f x 
  ...  | True  = x :: (stdfilter f xs)
  ...  | False = stdfilter f xs
  
  fancyfilter : {A : Set} (f : A -> Bool) -> Sat f -> List A -> List A
  fancyfilter f _ l = stdfilter f l
 
That is, the Sat f argument is used only to impose the precondition that
you wanted, and it has no bearing on how filter actually computes.  

Of course, the trade-off here is that to call filter you have to cough
up an argument on which the function is satisfiable.  I don't know a way
to get the compiler to construct this proof for you, but maybe someone
who has done more dependent programming than I have knows a trick.

You may be able to mimic this using GADTs, but it likely won't be as
direct, because the 'Check (f a)' argument to Sat talks about the very
code that you're passing to filter.
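(A rough Haskell approximation of the Sat idea, not a faithful encoding of the
Agda version: here the witness is checked by a smart constructor at run time
rather than by the type checker, so the guarantee is correspondingly weaker.)

  -- A predicate together with an input on which it is known to hold.
  data Sat a = Sat { satPred :: a -> Bool, satWitness :: a }

  -- Smart constructor: only produces a Sat if the predicate really holds.
  mkSat :: (a -> Bool) -> a -> Maybe (Sat a)
  mkSat f a
    | f a       = Just (Sat f a)
    | otherwise = Nothing

  -- A filter that demands a witness of satisfiability for its predicate.
  filterSat :: Sat a -> [a] -> [a]
  filterSat (Sat f _) = filter f

  -- e.g. fmap (\s -> filterSat s [1..10]) (mkSat even 2)   ==>  Just [2,4,6,8,10]
  --      mkSat (\x -> x == (1 :: Int) && x == 2) 1         ==>  Nothing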

-Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parameterisations of Monads

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 7:48 AM, Matthew Sackman [EMAIL PROTECTED] wrote:
 So I was thinking how dull and uninspiring the current definiton of
 Monad really is and came up with some more interesting
 parameterisations. The only problem with this one is I'm a) not sure if
 it still is a Monad and b) very unsure if it's of any use. There's the
 possibility that chucking Cont in there or using newtype to simultate
 multiple arrows / type lambdas may lead to more interesting instances,
 but can anyone think of exciting use cases for this stuff?

 Feel free to fill in the instances! It's also not a parameterisation
 I've seen before.

I can't!  That's because all the instances except for (->) have free
type variables in a covariant position, so I'd be forced to use
undefined all over the place.  And the State instance just confuses
me... :-)

However, I think most Arrows would work as the first parameter, it's
just not clear they would be useful.

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Signature for non-empty filter

2008-02-05 Thread Henning Thielemann

Is Haskell's type system including extensions strong enough for describing
a function, that does not always return a trivial value? E.g.
   (filter (\x -> x==1 && x==2))
  will always compute an empty list. Using an appropriate signature for
the function it shall be rejected at compile time, because it is probably
a mistake - probably (filter (\x -> x==1 || x==2) xs) was meant. I assume
that the type must contain somehow an example input for which the function
value is non-trivial. If Haskell's type system is not strong enough, what
about dependently typed languages like Cayenne or Epigram? (I know,
theorem provers have no problem with such types.)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data.Ord and Heaps (Was: Why functional programming matters)

2008-02-05 Thread Roberto Zunino

(Sorry for the late reply.)

[EMAIL PROTECTED] wrote:

I'd really like to write

  class (forall a . Ord p a) => OrdPolicy p where

but I guess that's (currently) not possible.


Actually, it seems that something like this can be achieved, at some price.

First, I change the statement ;-) to

  class (forall a . Ord a => Ord p a) => OrdPolicy p

since I guess this is what you really want.

Then, we reify the Ord class with a GADT:

  data O a where O :: Ord a => O a

Then, we reify the forall, using GADT+impredicativity:

  data O1 p where O1 :: (forall a . Ord a => O (p a)) -> O1 p

We can express the constraint with a class OrdAll, providing the GADT proof:

  class OrdAll p where
  ordAll :: O1 p

Instances are easy to define, I think:

  instance OrdAll [] where
  ordAll = O1 O

Your class becomes then:

  class OrdAll p = OrdPolicy p where ...

Actually, using this is not exactly nice, since you have to 
'instantiate' the forall on your own. For example,


fooSort :: forall p a . (OrdPolicy p, Ord a) => [p a] -> [p a]
fooSort = case ordAll of
  O1 o -> case o of
    (O :: O (p a)) -> sort

* * *

Actually, a simpler (untested) approach could be:

   class OrdAll p where
   ordAll :: Ord a => O (p a)

This would make the O1 hack useless.

Regards,
Zun.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Jeff φ
 I want to say thanks to everyone who responded to my mutable array post.
I'm going to work through and experiment with all the comments people
posted.  It might take me a while.

Luke Palmer wrote:


 Hmm, how big is the array?   If it's pretty big, that's
 understandable.  Frankly, it's because foldl sucks: I have never seen
 a reason to use it.  You should be using the strict variant foldl'
 here.  (I don't think there is a foldl1').  And that will get rid of
 your big function calc_max_2d_elem.



I should have mentioned that I'm working with a 2D array that is 1024 x
1024.  Eventually, this code will have to work with arrays that are much
larger.  (For fun I write image processing and fractal art programs.)  I
replaced the foldl1 with foldl1'.  Unfortunately, I still get a stack
overflow.

Chaddaï Fouché wrote:

 Sorry but none of those propositions change the heart of the problem :
 the list of elements is totally produced before it can be consumed
 due to the strict monadic (IO or ST) nature of getElems. Thus you get
 an extraordinary waste of memory as well as resources...



This is interesting.  I've been programming in Concurrent Clean for a
while.  Instead of monads, Clean supports unique types for mutable arrays
and IO.  In Clean, I can write code that iterates through a mutable array by
converting it to a lazy list.  This is convenient because I can use all the
nice list processing functions that are available.

Changing the subject slightly, I once wrote code in Concurrent Clean that
filtered a file that was larger than the available memory on my PC.  I did
this by creating a function that returned the contents of the original file
as a lazy list.  Then, I created functions to process the list and write the
processed list to a results file.  The code was not imperative at all.  The
function that wrote the results file forced the evaluation of the lazy
list.  As the lazy list was consumed, the contents of the original file were
read.  Is this possible with Monads in Haskell?  Based on your comments, I
suspect that in Haskell, one would have to explicitly code a loop that reads
a portion of the original file, processes it, and writes a portion of the
results file, over and over.

By the way, if anyone wants to see it, I can post some Clean code that
demonstrates the file processing I described.  Clean's syntax is very
similar to Haskell's.

Thanks,

Jeff
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Kalman Noel
Jeff φ wrote:
 Changing the subject slightly, I once wrote code in Concurrent Clean that
 filtered a file that was larger than the available memory on my PC.  I did
 this by creating a function that returned the contents of the original file
 as a lazy list.

Doing this is idiomatic in Haskell, although its usage is commonly
discouraged in more complex UI settings because you cannot ever close
the file handle until the end of the program.  The relevant functions
are to be found in the Prelude (or in Data.ByteString.Lazy, for that
matter).
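
(A minimal sketch of that idiom using the Prelude's lazy readFile/writeFile;
the file names here are placeholders:)

  main :: IO ()
  main = do
    contents <- readFile "input.txt"     -- read lazily, on demand
    writeFile "output.txt"               -- consuming the list drives the reading
      (unlines (filter (not . null) (lines contents)))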


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 4:36 PM, Jeff φ [EMAIL PROTECTED] wrote:

 I want to say thanks to everyone who responded to my mutable array post.
 I'm going to work through and experiment with all the comments people
 posted.  It might take me a while.

 Luke Palmer wrote:
  Hmm, how big is the array?   If it's pretty big, that's
  understandable.  Frankly, it's because foldl sucks: I have never seen
  a reason to use it.  You should be using the strict variant foldl'
  here.  (I don't think there is a foldl1').  And that will get rid of
  your big function calc_max_2d_elem.
 

 I should have mentioned that I'm working with a 2D array that is 1024 x
 1024.  Eventually, this code will have to work with arrays that are much
 larger.  (For fun I write image processing and fractal art programs.)  I
 replaced the foldl1 with foldl1'.  Unfortunately, I still get a stack
 overflow.

Right, that was my mistake.  The reason is right here:

 Chaddaï Fouché wrote:

 
  Sorry but none of those propositions change the heart of the problem :
  the list of elements is totally produced before it can be consumed
  due to the strict monadic (IO or ST) nature of getElems. Thus you get
  an extraordinary waste of memory as well as resources...



 This is interesting.  I've been programming in Concurrent Clean for a while.
 Instead of monads, Clean supports unique types for mutable arrays and IO.
 In Clean, I can write code that iterates through a mutable array by
 converting it to a lazy list.  This is convenient because I can use all the
 nice list processing functions that are available.

 Changing the subject slightly, I once wrote code in Concurrent Clean that
 filtered a file that was larger than the available memory on my PC.  I did
 this by creating a function that returned the contents of the original file
 as a lazy list.  Then, I created functions to process the list and write the
 processed list to a results file.  The code was not imperative at all.  The
 function that wrote the results file forced the evaluation of the lazy list.
 As the lazy list was consumed, the contents of the original file were read.
 Is this possible with Monads in Haskell?

Yes, using hGetContents, which is considered bad practice by many
people here.  The problem is that hGetContents breaks referential
transparency, and I suspect that whatever Clean does to lazily read
files also does (though I can't be sure, I haven't looked in any
detail at uniqueness types).  That is, the contents of the returned
list depend on when you read it, which is not allowed in a
referentially transparent language.

The same applies to your problem.  getElems cannot return a lazy list
of elements*, because what if the array were changed between the point
that you did the getElems and the point you required the element.  So
it seems that actually specifying the order of evaluation using an
imperative-style loop is the only pure way to do this.

* Well, it could, but it would require some cleverness like
copy-on-write logic under the hood.
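
(A minimal illustrative sketch of that explicit-order approach, not the
original poster's code: fold strictly over the array one element at a time
instead of materialising the list that getElems would build.)

  import Control.Monad (foldM)
  import Data.Array.IO (IOUArray, getBounds, readArray)

  -- Maximum element of a non-empty mutable array, reading one cell at a time.
  maxElem :: IOUArray Int Double -> IO Double
  maxElem arr = do
    (lo, hi) <- getBounds arr
    first    <- readArray arr lo
    foldM step first [lo + 1 .. hi]
    where
      step acc i = do
        x <- readArray arr i
        return $! max acc x    -- keep the accumulator strict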

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Signature for non-empty filter

2008-02-05 Thread Bulat Ziganshin
Hello Henning,

Tuesday, February 5, 2008, 6:01:27 PM, you wrote:

 Is Haskell's type system including extensions strong enough for describing
 a function, that does not always return a trivial value? E.g.
   (filter (\x -> x==1 && x==2))

such things may be detected by a (too) smart compiler, but in general
it's undecidable: filter (if LifeHasMeaning then const True else odd) ;)

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Bulat Ziganshin
Hello Jeff,

Tuesday, February 5, 2008, 7:36:27 PM, you wrote:

 Changing the subject slightly, I once wrote code in Concurrent
 Clean that filtered a file that was larger than the available memory
 on my PC. 
 Is this possible with Monads in Haskell? 

google for simple unix tools

-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Why is this so inefficient?

2008-02-05 Thread Jefferson Heard
I thought this was fairly straightforward, but where the marked line
finishes in 0.31 seconds on my machine, the actual transpose takes
more than 5 minutes.  I know it must be possible to read data in
haskell faster than this.  I'm trying to read a 100MB comma delimited
file.  I've tried both CSV modules, and these take even longer to read
the file.  Is there some general best-practice for reading and parsing
large amounts of data that I'm not aware of?

I have tried, by the way, a couple of things, including putting a bang
(!) before row in transposeRow and using foldr instead of foldl', but
that didn't change anything other than force me to increase the stack
size to 100M on the command line.

I'm running in the profiler now, and I'll update this, but I thought I
would check and see if my head was on remotely straight to begin with.

-- Jeff

---
module ColumnMajorCSV where

import qualified Data.ByteString.Char8 as S
import qualified Data.Map as M
import qualified Data.List as L

transposeRow cols row = zipWith (:) row cols

transposeCSV :: [[S.ByteString]] -> M.Map String [S.ByteString]
transposeCSV (header:rows) = M.fromList (zip (map S.unpack header) spreadsheet)
    where spreadsheet = L.foldl' transposeRow emptySheet rows
          emptySheet  = take (length header) $ repeat []

dataFromFile :: String -> IO (M.Map String [S.ByteString])
dataFromFile filename = do
    f <- S.readFile filename
    print . length . map (S.split ',' $!) . S.lines $ f
        -- finishes in 0.31 seconds
    return . transposeCSV . map (S.split ',' $!) . S.lines $ f
        -- this takes 5 minutes and 6 seconds
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is this so inefficient?

2008-02-05 Thread Don Stewart
If the strings are relatively short, there can be a bottleneck
in the current Ord instance for ByteStrings. When lots of them
are in a map, the FFI calls to memcmp dominate.

I've a fix for this (do it all in Haskell for small strings), and
can push it if someone complains some more.

jefferson.r.heard:
 I thought this was fairly straightforward, but where the marked line
 finishes in 0.31 seconds on my machine, the actual transpose takes
 more than 5 minutes.  I know it must be possible to read data in
 haskell faster than this.  I'm trying to read a 100MB comma delimited
 file.  I've tried both CSV modules, and these take even longer to read
 the file.  Is there some general best-practice for reading and parsing
 large amounts of data that I'm not aware of?
 
 I have tried, by the way, a couple of things, including putting a bang
 (!) before row in transposeRow and using foldr instead of foldl', but
 that didn't change anything other than force me to increase the stack
 size to 100M on the command line.
 
 I'm running in the profiler now, and I'll update this, but I thought I
 would check and see if my head was on remotely straight to begin with.
 
 -- Jeff
 
 ---
 module ColumnMajorCSV where
 
 import qualified Data.ByteString.Char8 as S
 import qualified Data.Map as M
 import qualified Data.List as L
 
  transposeRow cols row = zipWith (:) row cols
  
  transposeCSV :: [[S.ByteString]] -> M.Map String [S.ByteString]
  transposeCSV (header:rows) = M.fromList (zip (map S.unpack header) spreadsheet)
      where spreadsheet = L.foldl' transposeRow emptySheet rows
            emptySheet  = take (length header) $ repeat []
  
  dataFromFile :: String -> IO (M.Map String [S.ByteString])
  dataFromFile filename = do
      f <- S.readFile filename
      print . length . map (S.split ',' $!) . S.lines $ f
          -- finishes in 0.31 seconds
      return . transposeCSV . map (S.split ',' $!) . S.lines $ f
          -- this takes 5 minutes and 6 seconds
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is this so inefficient?

2008-02-05 Thread Jefferson Heard
I've switched to this, which gets rid of the ByteString instances
fairly quickly.  None of them make it into the final map.  I'm still
not getting any faster response out of it, and it also complains that
my stack size is too small for anything of about 128K records or more.

import qualified Data.ByteString.Char8 as S
import qualified Data.Map as M
import qualified Data.List as L

transposeRow cols row = zipWith (:) (map (read . S.unpack) $ row) cols

transposeCSV :: [[S.ByteString]] -> M.Map String [Float]
transposeCSV (header:rows) = M.fromList (zip (map S.unpack header) spreadsheet)
    where spreadsheet = L.foldl' transposeRow emptySheet rows
          emptySheet  = take (length header) $ repeat []

dataFromFile :: String -> IO (M.Map String [Float])
dataFromFile filename = do
    f <- S.readFile filename
    return . transposeCSV . map (S.split ',' $!) . S.lines $ f

On Tue, Feb 5, 2008 at 1:15 PM, Don Stewart [EMAIL PROTECTED] wrote:
 If the  strings are relatively short, there can be a bottleneck
  in the current Ord instance for Bytestrings. When lots of them
  are in a map, the ffi calls to memcmp dominate.

  I've a fix for this (do it all in Haskell for small strings), and
  can push it if someone complains some more.

  jefferson.r.heard:


  I thought this was fairly straightforward, but where the marked line
   finishes in 0.31 seconds on my machine, the actual transpose takes
   more than 5 minutes.  I know it must be possible to read data in
   haskell faster than this.  I'm trying to read a 100MB comma delimited
   file.  I've tried both CSV modules, and these take even longer to read
   the file.  Is there some general best-practice for reading and parsing
   large amounts of data that I'm not aware of?
  
   I have tried, by the way, a couple of things, including putting a bang
   (!) before row in transposeRow and using foldr instead of foldl', but
   that didn't change anything other than force me to increase the stack
   size to 100M on the command line.
  
   I'm running in the profiler now, and I'll update this, but I thought I
   would check and see if my head was on remotely straight to begin with.
  
   -- Jeff
  
   ---
   module ColumnMajorCSV where
  
   import qualified Data.ByteString.Char8 as S
   import qualified Data.Map as M
   import qualified Data.List as L
  
    transposeRow cols row = zipWith (:) row cols
   
    transposeCSV :: [[S.ByteString]] -> M.Map String [S.ByteString]
    transposeCSV (header:rows) = M.fromList (zip (map S.unpack header) spreadsheet)
        where spreadsheet = L.foldl' transposeRow emptySheet rows
              emptySheet  = take (length header) $ repeat []
   
    dataFromFile :: String -> IO (M.Map String [S.ByteString])
    dataFromFile filename = do
        f <- S.readFile filename
        print . length . map (S.split ',' $!) . S.lines $ f
            -- finishes in 0.31 seconds
        return . transposeCSV . map (S.split ',' $!) . S.lines $ f
            -- this takes 5 minutes and 6 seconds
   ___
   Haskell-Cafe mailing list
   Haskell-Cafe@haskell.org
   http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
I try to take things like a crow; war and chaos don't always ruin a
picnic, they just mean you have to be careful what you swallow.

-- Jessica Edwards
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 6:50 PM, Adrian Hey [EMAIL PROTECTED] wrote:

 Luke Palmer wrote:
  On Feb 5, 2008 2:50 AM, Adrian Hey [EMAIL PROTECTED] wrote:
  I think it bites a lot less often than it otherwise would because most
  people will deliberately chose to use heap in preference to stack (at
  least when writing eager code) just to avoid the problem. But it still
  bites pretty often anyway with lazy code for unexpected reasons.
  Arguably such situations are indeed a bug more often than not, but
  I still think terminating the program unnecessarily (at 8M stack) is
  bad default policy.
 
  Please, after all this, do give an example.

 If you mean an example of coding style and choice of stack vs. heap,
 I already have..

   http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html

Ah, sorry, I must have missed that message.

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] building the unix package and cabal

2008-02-05 Thread Galchin Vasili
Hi Ian,

 Should gmp-devel be an archive (.a) or a shared object (.so)? If an .a,
don't I have to have an environment var to point at its directory?
I tried your sequence below on unix-2.2.0.0 and still cannot resolve gmp.
I did find one gmp .rpm on one of the RHEL CDs but without -devel.

Regards, Vasili


On 2/4/08, Ian Lynagh [EMAIL PROTECTED] wrote:

 On Mon, Feb 04, 2008 at 10:50:13AM -0600, Galchin Vasili wrote:
  more specifically the gmp unsatisfied ref shows up with the
 DynamicLinker.

 This works for me, with GHC 6.8.2 and

 http://hackage.haskell.org/packages/archive/unix/2.2.0.0/unix-2.2.0.0.tar.gz

 $ ghc --make Setup
 $ ./Setup configure --user
 $ ./Setup build

 If it's not working for you then you might need a RHEL package called
 something like gmp-devel. If that doesn't fix it, please send us the
 exact commands you are running and the complete output.

 Note that you wouldn't normally compile the unix package yourself,
 however, as it is one of the packages that comes with GHC.


 Thanks
 Ian


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)

2008-02-05 Thread Bjorn Buckwalter
On Feb 5, 2008 2:16 PM, Alfonso Acosta [EMAIL PROTECTED] wrote:
 On Feb 5, 2008 4:10 PM, Henning Thielemann
 [EMAIL PROTECTED] wrote:
 
  On Fri, 1 Feb 2008, Aaron Denney wrote:
 
   On 2008-02-01, Bjorn Buckwalter [EMAIL PROTECTED] wrote:
If Naturals had been sufficient for me I wouldn't have done my own
implementation (I'm unaware of any other implementation of Integers).
And there is certainly a lot of value to the clearer error messages
from a decimal representation.
  
   I did a balanced-base-three (digits are 0, and +- 1) representation to
   get negative decimals.
 
  Nice. In German the digit values are sometimes called eins, keins, meins. 
  :-)

 I'm almost done with the decimal library but it would be nice to check
 some Integer implementations for future inclusion. So, Aaron, Björn,
 are your implementations available somewhere?

As noted elsewhere in the thread my implementation is available at:

http://www.buckwalter.se/~bjorn/darcs/dimensional/Numeric/NumType.lhs

It is my intent to extract this (self-contained) module to its own
package and put it on Hackage. It's been a low priority for me but I'm
rather incentivized by this thread.

Thanks,
Bjorn
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Neil Mitchell
Hi

 If you mean an example of coding style and choice of stack vs. heap,
 I already have..

   http://www.haskell.org/pipermail/haskell-cafe/2008-January/038832.html

I'm at a loss as to why you want a strict version of take. It's clearly
not for performance, as it performs slower. I'd say both the gobbler
programs have a bug, namely that they are not sufficiently lazy.

As an aside, my version of this function would be:

neilGobbler :: Int -> [x] -> [x]
neilGobbler n xs = length res `seq` res
    where res = take n xs

I have no idea if it takes the heap or the stack, or if it performs
faster or slower. If you still have whatever test you used on the
gobbler, perhaps you could tell us.

 If you mean an example of it biting with lazy code, this is discussed
 so often you'd be spoiled for choice if you search the mailing list
 archives. Here's a good one..

   http://www.haskell.org/pipermail/haskell-cafe/2005-December/013521.html

 This example actually shows the problem twice. In one case it's solvable
 by using foldl' instead of foldl.

Which reduces the memory from O(n) to O(1). Surely that's a good thing,
and the code before had a space leak. Space leak is bad, therefore
telling people about it is good.

I think it's sensible to let people set their own stack bound (which is
possible), but it's clear, just from taking an informal poll of
respondents to this thread, that the current default should indeed be the
default. You seem to be the only person clamouring for an unlimited
stack, and thanks to +RTS, you already have it.

Thanks

Neil
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Adrian Hey

Bulat Ziganshin wrote:

Hello Matthew,

Monday, February 4, 2008, 11:45:51 PM, you wrote:


That would be nice. But it's only beneficial if there are programs
which take large amounts of stack at some point, but then shrink down
to very little stack and continue for a reasonable amount of time.



From the 'when I was a lad' department...



Thinking back to when Java transitioned to a garbage collector that could give
memory back to the OS, we got some unexpected benefits. Consider a machine


i would also be happy if ghc would return unused *heap* memory back to
the OS - it's immediately needed for my GUI program where users may open
huge files and then close them. but i personally don't have the same
need for *stack*


How do you know you don't or won't have the same need for stack?

Given that most most real programs are going to pull in library code
written by all sorts of people, don't you want your program to be robust
and memory efficient even if it makes use of libraries whose authors
chose stack gobbling in preference to heap gobbling, or who used a
(currently non-existent AFAIK) CPS based implementation for development?

I just don't get this idea that the current implementation (8M limit
IIRC in the absence of +RTS options) is good. 8M is still a pretty
big stack and (8M - 4K) per thread seems like an awful lot of memory
to waste to me. If we're all so sure that big stacks are a bug then
why bother allowing them to grow at all? Why not just limit them to 4K?

Actually I think the latter option above might be a good way to discover
how many bug free Haskell progs there really are out there. Precious
few I suspect :-(

Regards
--
Adrian Hey










___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)

2008-02-05 Thread Alfonso Acosta
On Feb 5, 2008 4:10 PM, Henning Thielemann
[EMAIL PROTECTED] wrote:

 On Fri, 1 Feb 2008, Aaron Denney wrote:

  On 2008-02-01, Bjorn Buckwalter [EMAIL PROTECTED] wrote:
   If Naturals had been sufficient for me I wouldn't have done my own
   implementation (I'm unaware of any other implementation of Integers).
   And there is certainly a lot of value to the clearer error messages
   from a decimal representation.
 
  I did a balanced-base-three (digits are 0, and +- 1) representation to
  get negative decimals.

 Nice. In German the digit values are sometimes called eins, keins, meins. 
 :-)

I'm almost done with the decimal library but it would be nice to check
some Integer implementations for future inclusion. So, Aaron, Björn,
are your implementations available somewhere?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)

2008-02-05 Thread Alfonso Acosta
On Feb 5, 2008 8:29 PM, Bjorn Buckwalter [EMAIL PROTECTED] wrote:

 On Feb 5, 2008 2:16 PM, Alfonso Acosta [EMAIL PROTECTED] wrote:
  On Feb 5, 2008 4:10 PM, Henning Thielemann
  [EMAIL PROTECTED] wrote:
  
   On Fri, 1 Feb 2008, Aaron Denney wrote:
  
On 2008-02-01, Bjorn Buckwalter [EMAIL PROTECTED] wrote:
 If Naturals had been sufficient for me I wouldn't have done my own
 implementation (I'm unaware of any other implementation of Integers).
 And there is certainly a lot of value to the clearer error messages
 from a decimal representation.
   
I did a balanced-base-three (digits are 0, and +- 1) representation to
get negative decimals.
  
   Nice. In German the digit values are sometimes called eins, keins, 
   meins. :-)
 
  I'm almost done with the decimal library but it would be nice to check
  some Integer implementations for future inclusion. So, Aaron, Björn,
  are your implementations available somewhere?

 As noted elsewhere in the thread my implementation is available at:

 http://www.buckwalter.se/~bjorn/darcs/dimensional/Numeric/NumType.lhs

Thanks!

 It is my intent to extract this (self-contained) module to its own
 package and put on hackage. It's been a low priority for me but I'm
 rather incentivized by this thread.

Great!

How about joining efforts? As I said I almost have a preliminary
version of the decimal library, which I'll release for reviewing
purposes soon (it won't include Integer computations though).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fmap vs. liftM

2008-02-05 Thread Dan Weston

Can you do this with a GHC rule? Something like:

{-# RULES
  "join_dot_fmap_return/id"  forall x . join (fmap return x) = x
#-}

Dan

Henning Thielemann wrote:

On Tue, 5 Feb 2008, Brandon S. Allbery KF8NH wrote:


On Feb 5, 2008, at 8:31 , Henning Thielemann wrote:


How do you convince the compiler that
  'join (fmap return x) == x' ?

How do you convince it that the current formulation of Monad obeys
the monad laws?  (rhetorical)


My point was that the constraint 'liftM == fmap' cannot be expressed in
Haskell 98, and the answer by Miguel suggested that it would be possible
by a different design of class Monad. The above was my objection to this
suggestion.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parameterisations of Monads

2008-02-05 Thread Dan Weston

Matthew,

Your SuperMonad seems remarkably similar to Gabor Greif's Thrist 
datatype [1,2] reported only six days ago on this list [3].


Can you compare/contrast your class approach with his polymorphic type 
approach? Or have I completely confused the two because of the similar 
kind of their arguments?


data Thrist :: (* -> * -> *) -> * -> * -> * where
  Nil  :: Thrist p a a
  Cons :: p a b -> Thrist p b c -> Thrist p a c

data Arrow' :: (* -> * -> *) -> * -> * -> * where
  Arr   :: Arrow a => a b c -> Arrow' a b c
  First :: Arrow a => Arrow' a b c -> Arrow' a (b, d) (c, d)


[1] http://heisenbug.blogspot.com/2007/11/trendy-topics.html
[2] 
http://heisenbug.blogspot.com/2008/01/embeddings-part-one-arrow-thrist.html

[3] http://thread.gmane.org/gmane.comp.lang.haskell.cafe/35907/focus=35957

Dan

Matthew Sackman wrote:

So I was thinking how dull and uninspiring the current definiton of
Monad really is and came up with some more interesting
parameterisations. The only problem with this one is I'm a) not sure if
it still is a Monad and b) very unsure if it's of any use. There's the
possibility that chucking Cont in there or using newtype to simultate
multiple arrows / type lambdas may lead to more interesting instances,
but can anyone think of exciting use cases for this stuff?

Feel free to fill in the instances! It's also not a parameterisation
I've seen before.

Matthew


class SuperMonad (m1 :: * -> * -> *) (m2 :: * -> *) where
    (>>~)   :: m1 (m2 a) (m1 (m2 b) (m2 b))
    (>>=~)  :: m1 (m2 a) (m1 (m1 a (m2 b)) (m2 b))
    returns :: m1 a (m2 a)

instance (Monad m) => SuperMonad ((->)) m where
    (>>~)   :: m a -> m b -> m b
    (>>~)   = (>>)
    (>>=~)  :: m a -> (a -> m b) -> m b
    (>>=~)  = (>>=)
    returns :: a -> m a
    returns = return

instance (Monad m) => SuperMonad ((,)) m where
    (>>~)   :: (m a, (m b, m b))
    (>>=~)  :: (m a, ((a, m b), m b))
    returns :: (a, m a)

instance (Monad m) => SuperMonad Either m where
    (>>~)   :: Either (m a) (Either (m a) (m b))
    (>>=~)  :: Either (m a) (Either (Either a (m b)) (m b))
    returns :: Either a (m a)

instance (Monad m) => SuperMonad State m where
    (>>~)   :: State (m a) (State (m a) (m b))
    (>>=~)  :: State (m a) (State (State a (m b)) (m b))
    returns :: State a (m a)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why is this so inefficient?

2008-02-05 Thread Bryan O'Sullivan
Jefferson Heard wrote:
 I thought this was fairly straightforward, but where the marked line
 finishes in 0.31 seconds on my machine, the actual transpose takes
 more than 5 minutes.  I know it must be possible to read data in
 haskell faster than this.

I took a look into this, writing a similar, but simpler, program.  Half
of the runtime, and 2/3 of the allocation, is spent in ByteString's
split function.  The remaining portions are spent in transposing the list.

COST CENTRE   %time %alloc  ticks     bytes
split          66.7   65.1     56     12013
xpose          31.0   32.8     26  60618031
read            1.2    2.0      1   3640229
lines           1.2    0.1      1    260002

I've attached two programs to demonstrate the problem.  One creates a
sample speadsheet; the other transposes it.

I spent a little time trying to find a faster replacement for
ByteString.split, but no luck before I had to return to other things.
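
For what it's worth, the obvious hand-rolled replacement looks something
like the sketch below.  It's untested, quite possibly no faster, and the
name splitOn is mine, not a proposal for the bytestring API.

import qualified Data.ByteString.Char8 as C

-- Split on a single character by repeatedly searching with elemIndex.
splitOn :: Char -> C.ByteString -> [C.ByteString]
splitOn c s = case C.elemIndex c s of
                Nothing -> [s]
                Just i  -> let (chunk, rest) = C.splitAt i s
                           in  chunk : splitOn c (C.drop 1 rest)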

b
import Data.List (foldl', transpose)
import qualified Data.ByteString.Char8 as C
import qualified Data.Map as M
import System.Environment (getArgs)

xpose name = do
    sheet <- (transpose
              .  {-# SCC "split" #-} map (C.split ',')
              .  {-# SCC "lines" #-} C.lines)
             `fmap` {-# SCC "read" #-}  C.readFile name
    let m = foldl' go M.empty sheet
    print (M.size m)
  where go m (x:xs) = {-# SCC "go" #-} M.insert x xs m

main = getArgs >>= mapM_ xpose
import Data.List
import System.IO
import System.Random

rint = show `fmap` (randomRIO (0,100) :: IO Int)

dump cols rows name = do
  h <- openFile name WriteMode
  sequence_ . take rows . repeat $ do
    cs <- sequence . take cols . repeat $ rint
    hPutStrLn h . concat . intersperse "," $ cs
  hClose h

main = dump 1000 1 "dump.csv"
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fmap vs. liftM

2008-02-05 Thread Felipe Lessa
On Feb 5, 2008 6:06 PM, Dan Weston [EMAIL PROTECTED] wrote:
 Can you do this with a GHC rule? Something like:

 {-# RULES
   "join_dot_fmap_return/id"  forall x . join (fmap return x) = x
 #-}

 Dan

I guess this would make use of the rule (otherwise the transformation
would change the code's semantics) but would not enforce that the rule
itself is valid (which is undecidable).
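
You could at least gain some confidence in the law for a particular
monad with a QuickCheck property; a sketch for Maybe only (which of
course proves nothing in general):

import Control.Monad (join)
import Test.QuickCheck

-- Checks join (fmap return x) == x, but only for the Maybe monad.
prop_joinFmapReturn :: Maybe Int -> Bool
prop_joinFmapReturn x = join (fmap return x) == x

main :: IO ()
main = quickCheck prop_joinFmapReturn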

Cheers,

-- 
Felipe.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] Haskell maximum stack depth

2008-02-05 Thread Bulat Ziganshin
Hello Adrian,

Tuesday, February 5, 2008, 10:15:59 PM, you wrote:

 i would be also happy if ghc will return unused *heap* memory back to
 OS - it's immediately required for my GUI program where users may open
 huge files and then close them. but i personally don't have the same
 need for *stack*

 How do you know you don't or won't have the same need for stack?

i run my program with rather large datasets - it's happy with the
current 8m stack. i had problems with the filterM function, though, and
replaced it with my own, tail-recursive but probably less efficient
implementation
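
the replacement was just an accumulator-style loop; something like the
sketch below (not my exact code):

-- filterM rewritten as an accumulating loop: it builds the result in
-- reverse and reverses once at the end, so in a strict monad such as
-- IO the stack stays flat
filterM' :: Monad m => (a -> m Bool) -> [a] -> m [a]
filterM' p = go []
  where
    go acc []     = return (reverse acc)
    go acc (x:xs) = do keep <- p x
                       go (if keep then x:acc else acc) xs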

i know that the real problems for my programs will be solved by adding
heap releasing or, say, x64 support. do you have real problems with
stack management? if not, isn't it better to let the ghc developers
solve real problems for me and other people? things will never be
ideal

from my own POV the only serious argument may be that the current
situation limits the usability of some programming techniques (CPS?)
which could otherwise increase programmer productivity


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread John Lato
At least for file processing, I don't think the lazy solution is as
bad as some people on this list indicate.  My solution was to define a
function processAudioFile :: (Handle, Handle) -> (ASig -> ASig) -> IO (),
similar to interact.  The function reads from the first handle and
writes to the second (the problem domain requires two separate files).
 Contents of the first are read into a lazy bytestring with
hGetContents (from Data.ByteString.Lazy), decoded into an ASig (which
in the current version is actually a tuple of a list and an Int of the
total length, but I'm reworking this into a monad), and processed.
The processed list is then encoded back into a bytestring and written
with hPut.  I then stick the whole thing in a bracket to open and
close the filehandles, and call the bracketed function when I'm ready
to do processing.
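
Stripped of the format details, the shape of the thing is roughly the
sketch below.  The ASig type and the decode/encode functions here are
placeholders, not my actual code; they only show the plumbing.

import qualified Data.ByteString.Lazy as L
import Data.Word (Word8)
import System.IO

type ASig = [Word8]

decode :: L.ByteString -> ASig
decode = L.unpack

encode :: ASig -> L.ByteString
encode = L.pack

processAudioFile :: (Handle, Handle) -> (ASig -> ASig) -> IO ()
processAudioFile (hIn, hOut) f = do
  contents <- L.hGetContents hIn              -- lazy read from the input
  L.hPut hOut (encode (f (decode contents)))  -- process and write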

I'm pretty happy with this solution, for several reasons:
1. The actual processing code remains purely functional.
2.  I didn't have to write imperative-style looping constructs.
3.  Handles get closed after use (even with exceptions, thanks to bracket).
4.  Because all IO, processing, and writing is encapsulated in one
function, everything happens sequentially as it's supposed to, so I
don't get exceptions about lazy filehandles being closed.
5.  Performance has been good.  Memory usage is lower than expected,
and it's fairly fast (at least when I remember to use a non-profiled
version).  I've tested this approach with wave files into the 100's of
MB so far.  Perhaps not quite as fast as optimized C, but good enough
for me.

I'm not quite sure how to get around the problem of getElems being
strict, though.  I do have one idea, but I don't know how it would
work in practice:

-- let ary_max = foldl1' max $ elems $ unsafeFreeze myArray

If you use a boxed array type (IOArray or STArray) for myArray, and
compile with GHC, no copying is necessary (you may need to use type
annotations to guarantee this).  Then use the foldl' function to get
array_max, and map it onto the original mutable array.  I think it
would be safe provided that you calculate ary_max before you start to
modify the array, which is true for normalization.

It's worth a try, anyway.
John Lato

  Changing the subject slightly, I once wrote code in Concurrent Clean that
  filtered a file that was larger than the available memory on my PC.  I did
  this by creating a function that returned the contents of the original file
  as a lazy list.  Then, I created functions to process the list and write the
  processed list to a results file.  The code was not imperative at all.  The
  function that wrote the results file forced the evaluation of the lazy list.
  As the lazy list was consumed, the contents of the original file were read.
  Is this possible with Monads in Haskell?

 Yes, using hGetContents, which is considered bad practice by many
 people here.  The problem is that hGetContents breaks referential
 transparency, and I suspect that whatever Clean does to lazily read
 files also does (though I can't be sure, I haven't looked in any
 detail at uniqueness types).  That is, the contents of the returned
 list depend on when you read it, which is not allowed in a
 referentially transparent language.

 The same applies to your problem.  getElems cannot return a lazy list
 of elements*, because what if the array were changed between the point
 that you did the getElems and the point you required the element.  So
 it seems that actually specifying the order of evaluation using an
 imperative-style loop is the only pure way to do this.

 * Well, it could, but it would require some cleverness like
 copy-on-write logic under the hood.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: bimap 0.2

2008-02-05 Thread Stuart Cook
On Tue, Feb 5, 2008 at 11:33 PM, Christian Maeder
[EMAIL PROTECTED] wrote:
 Neil Mitchell wrote:
   Yes, an MTL dependency is nothing to worry about at all, and isn't
   worth even thinking about removing given its actually used.

  I would appreciate haskell98 portability!

My development version has removed the need for
Control.Monad.Exception and Control.Arrow. The only remaining H98
incompatibility I can think of is the use of foldl' in fromList.


  We've called it injective maps. Does surjectivity make sense at all?
  Our other names are bad, but maybe transpose or inverse is better
  than twist (viewing maps as finite functions).

In my mind, surjectivity is the property that each key in the
right-hand map is a value in the left-hand map (and vice-versa). This
is related to the unsafeInsert problem I mentioned earlier.

I'm open to suggestions for alternatives to twist. I think it's cute,
because it suggests transposing something without fundamentally
changing it, but a less fanciful name could be a good idea.


  Our delete function takes both values as input, possibly deleting two
  entries, but your two delete functions make more sense.

Incidentally, someone might find it useful to have a two-argument
delete defined as

  deletePair :: (Ord a, Ord b) => a -> b -> Bimap a b -> Bimap a b
  deletePair x y bi
      | (x, y) `pairMember` bi = delete x bi
      | otherwise              = bi

but it's easy enough to define that I probably don't need to provide it myself.


  
 http://www.dfki.de/sks/hets/src-distribution/daily/Hets/docs/Common-InjMap.html
  or
  
 http://www.dfki.de/sks/hets/src-distribution/versions/Hets/docs/Common-InjMap.html

The getAToB and getBToA functions are interesting. I'm thinking of
adding them as toMap and toMapR.



Stuart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parameterisations of Monads

2008-02-05 Thread Gabor Greif


Am 05.02.2008 um 21:27 schrieb Dan Weston:


Matthew,

Your SuperMonad seems remarkably similar to Gabor Greif's Thrist  
datatype [1,2] reported only six days ago on this list [3].


Can you compare/contrast your class approach with his polymorphic  
type approach? Or have I completely confused the two because of the  
similar kind of their arguments?


data Thrist :: (* -> * -> *) -> * -> * -> * where
  Nil  :: Thrist p a a
  Cons :: p a b -> Thrist p b c -> Thrist p a c

data Arrow' :: (* -> * -> *) -> * -> * -> * where
  Arr   :: Arrow a => a b c -> Arrow' a b c
  First :: Arrow a => Arrow' a b c -> Arrow' a (b, d) (c, d)



For the record, I have done the monad into thrist embedding now:

http://heisenbug.blogspot.com/2008/02/embeddings-part-two-monad-thrist.html
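
The gist (a minimal sketch of the idea, not the exact code from the
post) is to string Kleisli-style steps into a thrist and fold them back
into the monad with bind:

{-# LANGUAGE GADTs, KindSignatures #-}

data Thrist :: (* -> * -> *) -> * -> * -> * where
  Nil  :: Thrist p a a
  Cons :: p a b -> Thrist p b c -> Thrist p a c

newtype Step m a b = Step (a -> m b)

-- Interpreting such a thrist is a simple fold with (>>=).
runThrist :: Monad m => Thrist (Step m) a b -> a -> m b
runThrist Nil               x = return x
runThrist (Cons (Step f) r) x = f x >>= runThrist r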


Will start pondering about mfix and restricted monads now.

Cheers,

Gabor

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Stefan O'Rear
On Tue, Feb 05, 2008 at 06:00:38PM -0600, John Lato wrote:
 -- let ary_max = foldl1' max $ elems $ unsafeFreeze myArray
 
 If you use a boxed array type (IOArray or STArray) for myArray, and
 compiled with GHC, no copying is necessary (you may need to use type
 annotations to guarantee this).  Then use the foldl' function to get
 array_max, and map it onto the original mutable array.  I think it
 would be safe provided that you calculate ary_max before you start to
 modify the array, which is true for normalization.

Eek!  unsafeFreeze isn't a type cast, it actually modifies flags in the
heap object which are used by the generational garbage collector.  It's
quite conceivable that you could get segfaults by modifying a boxed
array after passing it to unsafeFreeze.

This, I believe, would work:

let ary_max = foldl1' max [ inlinePerformIO (readArray myArray ix)
                          | ix <- range (inlinePerformIO (getBounds myArray))
                          ]

But it's equally untested.

Stefan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread John Lato
Hmm.  It looks like I forgot a step, and it probably would segfault as
given.  That's what I get for mucking about with unsafe* functions.
How about this?

let frozen_ary = unsafeFreeze myArray
    ary_max    = foldl1' max $ elems frozen_ary
in  ary_max `seq` map (/ ary_max) $ unsafeThaw frozen_ary

This sequence doesn't modify the array object after freezing, except
to call unsafeThaw on it, and there is no need to access the frozen
array after unsafeThaw.  Even so, piling another unsafe function on to
clean up the mess from the first unsafe function strikes me as an
anti-pattern (even though the docs seem to indicate that unsafeThaw
after unsafeFreeze could work).  Furthermore, I can't say what would
happen to any of the original references to myArray, other than that
it's better not to use them.

I'm still mostly a noob, but assuming it works, your version with
inlinePerformIO looks better to me, even with the caveats of
inlinePerformIO.
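
Of course, the boring alternative is to skip the unsafe functions
entirely and make two passes over the mutable array.  Something like
the sketch below (untested, and it assumes non-negative samples by
starting the maximum at 0):

import Control.Monad (foldM, forM_)
import Control.Monad.ST
import Data.Array.ST
import Data.Ix (range)

normalize :: STUArray s Int Double -> ST s ()
normalize ary = do
  bnds <- getBounds ary
  let ixs = range bnds
  -- pass 1: find the maximum element (forced as we go)
  m <- foldM (\acc i -> do x <- readArray ary i
                           let acc' = max acc x
                           acc' `seq` return acc') 0 ixs
  -- pass 2: scale every element in place
  forM_ ixs $ \i -> do
    x <- readArray ary i
    writeArray ary i (x / m)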

John

On Feb 5, 2008 7:26 PM, Stefan O'Rear [EMAIL PROTECTED] wrote:
 On Tue, Feb 05, 2008 at 06:00:38PM -0600, John Lato wrote:
  -- let ary_max = foldl1' max $ elems $ unsafeFreeze myArray
 
  If you use a boxed array type (IOArray or STArray) for myArray, and
  compiled with GHC, no copying is necessary (you may need to use type
  annotations to guarantee this).  Then use the foldl' function to get
  array_max, and map it onto the original mutable array.  I think it
  would be safe provided that you calculate ary_max before you start to
  modify the array, which is true for normalization.

 Eek!  unsafeFreeze isn't a type cast, it actually modifies flags in the
 heap object which are used by the generational garbage collector.  It's
 quite concievable that you could get segfaults by modifying a boxed
 array after passing it to unsafeFreeze.

 This, I believe, would work:

 let ary_max = foldl1' max [ inlinePerformIO (readArray myArray ix)
                           | ix <- range (inlinePerformIO (getBounds myArray)) ]

 But it's equally untested.

 Stefan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: bimap 0.2

2008-02-05 Thread Stuart Cook
On Wed, Feb 6, 2008 at 11:43 AM, Stuart Cook [EMAIL PROTECTED] wrote:
  My development version has removed the need for
  Control.Monad.Exception and Control.Arrow. The only remaining H98
  incompatibility I can think of is the use of foldl' in fromList.

Version 0.2.1 features:

  * almost-H98 compatibility (foldl' and hierarchical modules)
  * toMap/toMapR
  * big-O in function comments
  * version info in function comments


Stuart
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Implementing fixed-sized vectors (using datatype algebra?)

2008-02-05 Thread Bjorn Buckwalter
   I'm almost done with the decimal library but it would be nice to check
   some Integer implementations for future inclusion. So, Aaron, Björn,
   are your implementations available somewhere?
 
  As noted elsewhere in the thread my implementation is available at:
 
  http://www.buckwalter.se/~bjorn/darcs/dimensional/Numeric/NumType.lhs

 Thanks!

  It is my intent to extract this (self-contained) module to its own
  package and put on hackage. It's been a low priority for me but I'm
  rather incentivized by this thread.

 Great!

 How about joining efforts? As I said, I almost have a preliminary
 version of the decimal library, which I'll release soon for review
 purposes (it won't include Integer computations though)

Well, could you elaborate a little on joining efforts? The effort I
was planning to invest in my package consists mainly of creating a
.cabal file plus some logistics to get tarballs to where they have to
be.

I understand that you (and Wolfgang) are creating a library of type
level decimals for the purpose of constraining vector (list?) lengths.
After that I haven't been paying attention fully to the thread. Is the
goal to create a general-purpose library for type-level programming
and my module would fit into that grander scheme?

Or did you have something else in mind with joining efforts? E.g. help
reviewing your code or writing new code?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] background question about IO monad

2008-02-05 Thread Uwe Hollerbach
Hello, haskellers, I have a question for you about the IO monad. On
one level, I seem to be getting it, at least I seem to be writing
code that does what I want, but on another level I am almost certainly
not at all clear on some concepts. In various tutorials around the
web, I keep finding this notion that the IO monad is one-way, that
you can put stuff into it, but you can't take it out. (And, damn, some
of the contortions I went through while trying to figure out how to do
exceptions in my little scheme interpreter certainly bear that out! I
was sure beating my head against liftIO et al for a fair while...) But
let me post some code snippets here:

 lispUTCTime [] = doIOAction (getClockTime) toS allErrs
   where toS val = String (calendarTimeToString (toUTCTime val))

 lispUTCTime [IntNumber n] =
   return (String (calendarTimeToString (toUTCTime (TOD n 0))))

This is a little function I added to the interpreter a couple of days
ago: if you enter (UTCtime) with no arguments, it gets the current
time and formats it as UTC, like so (this came from the first
alternative above):

  lisp (UTCtime)
  Wed Feb  6 03:57:45 UTC 2008

and if you give an argument, you get that interpreted as a number of
seconds since epoch, and that gets formatted as UTC; this is from the
second alternative above:

  lisp (UTCtime 1.203e9)
  Thu Feb 14 14:40:00 UTC 2008

And here's the doIOAction routine: I wrote this, it's not some
system-level routine.

 doIOAction action ctor epred =
   do ret <- liftIO (try action)
      case ret of
        Left err  -> if epred err
                        then throwError (Default (show err))
                        else return (Bool False)
        Right val -> return (ctor val)

OK, with all that as background, on one level I understand why I need
the doIOAction routine in the first version of lispUTCTime: I'm
calling getClockTime, that's an IO action, so I enter the IO monad,
get the time, and return: all is cool. In the second version, all I'm
doing is taking a number and interpreting it as a time, and writing
that in a particular format; again, no problem.

But after that, it sure seems to me as if I've taken data out of the
IO monad... haven't I? Given that the second alternative never entered
doIOAction and that after both are done I have a string of characters,
prettily formatted to indicate a time, that's what it feels like to
this unwashed C programmer.

So, what's going on there? What's the difference between the two
alternatives? I would appreciate any enlightenment you can send my
way!

regards,
Uwe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsec3 stream properties

2008-02-05 Thread Philippa Cowderoy
I'm having a little difficulty finding full properties for Parsec3's 
Stream class, largely because I don't want to overspecify it with regard 
to side-effects. Here's the class:

  class Stream s m t | s -> t where
    uncons :: s -> m (Maybe (t,s))

The idea is that: 

* unfoldM uncons gives the [t] corresponding to the stream

* Assuming no relevant side-effects, unconsing the same value twice will 
yield the same result

What's a relevant side-effect? Well, that's up to the stream and the monad 
it's built in - but unconsing once shouldn't affect the result of 
unconsing a second time unless you've found something fiendishly clever 
that I haven't thought of. Using stream mutability to implement a 
programming language from within the parsing monad is not a good idea, 
though no doubt someone'll do it anyway!
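
As a concrete reference point, the instance for plain lists ought to be
just the following (a sketch; the shipped instances may differ in
detail):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Stream s m t | s -> t where
  uncons :: s -> m (Maybe (t, s))

-- Lists in any monad: no side effects, so unconsing twice trivially
-- gives the same answer.
instance Monad m => Stream [t] m t where
  uncons []     = return Nothing
  uncons (x:xs) = return (Just (x, xs))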

-- 
[EMAIL PROTECTED]

Society does not owe people jobs.
Society owes it to itself to find people jobs.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] background question about IO monad

2008-02-05 Thread Luke Palmer
On Feb 5, 2008 9:44 PM, Uwe Hollerbach [EMAIL PROTECTED] wrote:
   lisp (UTCtime)
   Wed Feb  6 03:57:45 UTC 2008
 ---
   lisp (UTCtime 1.203e9)
   Thu Feb 14 14:40:00 UTC 2008
 --
 But after that, it sure seems to me as if I've taken data out of the
 IO monad... haven't I? Given that the second alternative never entered
 doIOAction and that after both are done I have a string of characters,
 prettily formatted to indicate a time, that's what it feels like to
 this unwashed C programmer.

Formatting a time is a completely pure operation.  If you give the
time formatting function the same timestamp, you always get the same
string back.  It is getting the *current* time which is in the IO
monad, since it returns different results depending on when it is
called.
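
To make the split concrete, using the same old-time API as your code
(just an illustration):

import System.Time

-- Pure: the same ClockTime always formats to the same String.
formatUTC :: ClockTime -> String
formatUTC = calendarTimeToString . toUTCTime

-- Impure: the result depends on when it runs, so it lives in IO.
currentUTC :: IO String
currentUTC = fmap formatUTC getClockTime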

Luke
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Jeff φ
On Feb 5, 2008 4:58 PM, Chaddaï Fouché [EMAIL PROTECTED] wrote:

 2008/2/5, Jeff φ [EMAIL PROTECTED]:
  This is interesting.  I've been programming in Concurrent Clean for a
 while.
   Instead of monads, Clean supports unique types for mutable arrays and
 IO.
  In Clean, I can write code that iterates through a mutable array by
  converting it to a lazy list.  This is convenient because I can use all
 the
  nice list processing functions that are available.
 

 You could absolutely write something like that in Haskell, but as some
 have pointed out, this is probably _not a Good Idea_, even though it
 works in your particular case, the array could be modified between the
 time you get the lazy list and the time you actually read it... And
 there's no way to insure it's not happening in Haskell, and I strongly
 doubt there is in Concurrent Clean, could you confirm ?

 Concurrent Clean can handle this in a safe way.  Here's a code snippet for
normalize_2D_ary from ArrayTest.icl:

uToList_2D :: *(a u:(b c)) -> (.[c], *(a u:(b c))) | Array a (b c) & Array b c
map_in_place_2d_arr :: (a -> a) *(b *(c a)) -> *(b *(c a)) | Array b (c a) & Array c a
normalize_2D_ary :: *(a *(b c)) -> *(a *(b c)) | Array a (b c) & Array b c & / c & Ord c
normalize_2D_ary ary =
    let (lst,ary2) = uToList_2D ary
        max_elem   = foldl1 max lst
    in  map_in_place_2d_arr (\ x -> x / max_elem) ary2

uToList_2D takes a unique array, ary, and returns a tuple containing a list
of the array elements and a new array, ary2.  uToList_2D does not modify
ary, but Clean's type system forces any function that accesses a unique
array to thread the array through and return a new array.  Under the hood
the new array actually uses the same memory storage as the original
array, so it is efficient.  Threading the array serializes access, ensuring
the array won't be modified until the list is complete.  The type
system will generate an error if I write code that breaks referential
transparency.

Array and IO Monads can be implemented in Clean on top of the uniqueness
type system.  The do-notation could be added to the language.
However, the comments I've read so far lead me to believe the code I've
shown cannot be implemented in Haskell using a lazy list without resorting
to unsafe functions.  I'm beginning to suspect the uniqueness type system of
Clean is more general and flexible than Haskell's monads.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Jeff φ
I forgot to attach the source code for ArrayTest.icl


ArrayTest.icl
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Mutable arrays

2008-02-05 Thread Jonathan Cast

On 5 Feb 2008, at 10:14 PM, Jeff φ wrote:




On Feb 5, 2008 4:58 PM, Chaddaï Fouché [EMAIL PROTECTED]  
wrote:

2008/2/5, Jeff φ [EMAIL PROTECTED]:
 This is interesting.  I've been programming in Concurrent Clean  
for a while.
  Instead of monads, Clean supports unique types for mutable  
arrays and IO.

 In Clean, I can write code that iterates through a mutable array by
 converting it to a lazy list.  This is convenient because I can  
use all the

 nice list processing functions that are available.


You could absolutely write something like that in Haskell, but as some
have pointed out, this is probably _not a Good Idea_, even though it
works in your particular case, the array could be modified between the
time you get the lazy list and the time you actually read it... And
there's no way to insure it's not happening in Haskell, and I strongly
doubt there is in Concurrent Clean, could you confirm ?
Concurrent Clean can handle this in a safe way.  Here's a code  
snippet for normalize_2D_ary from ArrayTest.icl:


uToList_2D :: *(a u:(b c)) -> (.[c], *(a u:(b c))) | Array a (b c) & Array b c
map_in_place_2d_arr :: (a -> a) *(b *(c a)) -> *(b *(c a)) | Array b (c a) & Array c a
normalize_2D_ary :: *(a *(b c)) -> *(a *(b c)) | Array a (b c) & Array b c & / c & Ord c

normalize_2D_ary ary =
    let (lst,ary2) = uToList_2D ary
        max_elem   = foldl1 max lst
    in  map_in_place_2d_arr (\ x -> x / max_elem) ary2
uToList_2D takes a unique array, ary, and returns a tuple  
containing a list of the array elements and a new array, ary2.   
uToList_2D does not modify ary, but Clean's type system forces any  
function that accesses a unique array to thread the array through  
and return a new array.  Under the hood the new array actually  
uses the same memory storage as the original array.  So, it is  
effecient.  Threading the array serializes access insuring the  
array won't be modified until the list is complete.


I'm confused --- does uToList_2D return the head of the list before  
or after it finishes reading the array?  If before, then I don't see  
how the type system prevents me from modifying the array before I  
finish examining the list, as you claim.  If after, then the list  
isn't truly lazy.


The type system will generate an error if I wrote code that breaks  
referential transparency.


Array and IO Monads can be implemented in Clean on top of the  
uniqueness type system.  The do-notation could be added to the  
language.
However, the comments I've read so far lead me to believe the code  
I've shown cannot be implemented in Haskell using a lazy list  
without resorting to unsafe functions.  I'm beginning to suspect  
the uniqueness type system of Clean is more general and flexible  
than Haskell's monads.


You mean this particular monad, of course.

jcc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] weird ghci behavior with gadts and existentials

2008-02-05 Thread Chris Casinghino
Hi all,

I've been playing a bit with gadts and existentials lately, and I
have an example where I don't quite understand the behavior of
ghc.  The expression in question typechecks sometimes in some
versions of ghc, depending on where you write it, and not in
other versions.  Some other people I've asked report not getting
any errors, even when using apparently one of the same versions
of ghc I checked.

If I create a file which contains:

 data TypeGADT t where
     TyInt :: TypeGADT Int

 data ExpGADT t where
     ExpInt :: Int -> ExpGADT Int

 data HiddenTypeExp = forall t . Hidden (TypeGADT t, ExpGADT t)

 weird = case Hidden (TyInt, ExpInt 3) of Hidden (TyInt, e) -> e

I am able to :load it into ghci just fine (with -fglasgow-exts)
with version 6.8.2.  However, if I then copy the line:

let weird2 = case Hidden (TyInt, ExpInt 3) of Hidden (TyInt, e) -> e

into ghci, I get a type error.  In the HEAD version 6.9, I get a
type error on the definition of weird right away when I :load
the file.  The type error goes away if I add the line

 weird :: ExpGADT Int

before the definition of weird.

The type error in question is this:

interactive:1:46:
Inferred type is less polymorphic than expected
  Quantified type variable `t' escapes
When checking an existential match that binds
e :: ExpGADT t
The pattern(s) have type(s): HiddenTypeExp
The body has type: ExpGADT t
In a case alternative: Hidden (TyInt, e) -> e
In the expression:
case Hidden (TyInt, ExpInt 3) of Hidden (TyInt, e) -> e

So, several questions.

1) Why the discrepancy in behavior between :loading the file and
copying in the definition in 6.8.2.  It seems like, one way or the
other, this should be consistent.

2) Several other people report not getting any errors at all, even
people with ghc 6.8.2, one of the versions I tested.  What's the
right behavior?  My guess would be that this should cause no
type error, even without the type annotation.  The GADT pattern
match against TyInt in the case statement should refine the
existential type variable t to Int, so no existential type
variables are escaping.  Is that right?

3) Is there a bug here?  Are there two bugs (one, the typing error,
two, the difference between ghc and ghci)?  Or, do I just not
understand what is going on?

Sorry for the length of this email!

--Chris Casinghino
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe