Re: lines/unlines and inverse

2002-07-22 Thread Lars Henrik Mathiesen

 From: Koen Claessen [EMAIL PROTECTED]
 Date: Mon, 22 Jul 2002 11:25:16 +0200 (MET DST)

 Lars Henrik Mathiesen wrote:

  | lines . unlines = id
  | unlines . lines . unlines == unlines
  | words . unwords . words = words

 Don't be fooled by the information content of the second
 equation -- the first equation directly implies it:

But as Wolfgang noted, the first equation does not actually hold. I
have to admit to getting a brain malfunction just before sending my
post and mis-correcting my original, which was

  lines . unlines . lines = lines

So, while your implication is true, the antecedent isn't, and thus the
second equation does have some information content...
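For concreteness, the two laws that do hold can be written as QuickCheck
properties, and the one that does not has a small counterexample.
(QuickCheck is not part of the original exchange; this is only an
illustration.)

 import Test.QuickCheck (quickCheck)

 -- unlines . lines . unlines = unlines
 prop_unlines :: [String] -> Bool
 prop_unlines xs = unlines (lines (unlines xs)) == unlines xs

 -- lines . unlines . lines = lines
 prop_lines :: String -> Bool
 prop_lines s = lines (unlines (lines s)) == lines s

 -- lines . unlines = id fails, e.g.:
 --   lines (unlines ["a\nb"]) == ["a","b"], not ["a\nb"]
 -- e.g.  quickCheck prop_lines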

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)



Re: gcd 0 0 = 0

2001-12-18 Thread Lars Henrik Mathiesen

 From: Marc van Dongen [EMAIL PROTECTED]
 Date: Tue, 18 Dec 2001 09:32:49 +
 
 Alan Bawden ([EMAIL PROTECTED]) wrote:
 : Indeed, that's a nice way of putting it.  How about if the report just
 : says:
 : 
 :In order to make the non-negative integers into a lattice under `gcd'
 :and `lcm', we define `gcd 0 0 = 0'.

 It would surely make things a lot less accessible to people
 (including me) who have no (or only limited) knowledge of
 lattices. Why not make it more accessible and use the following rule
 (or something similar)?

 The greatest common divisor (gcd) of two integers a and b is the unique
 non-negative integer g which has each of the following two properties:
  1) g divides both a and b; and
  2) if g' also divides both a and b then g' also divides g.
 Here an integer a divides an integer b if there is an integer c
 such that b = c*a.

This is exactly what you get if you plug the relation 'divides' on the
non-negative integers into the definition of meet in a lattice. So
this formulation is no more or less complex to use than the lattice
one --- and people who do know about lattices will probably realize
this pretty fast.
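To make that concrete, here is a small sketch (mine, not from Marc's
mail) of the 'divides' relation and a bounded check of the
characterisation. For nonzero a or b every common divisor lies in
[1 .. max |a| |b|], so the finite check suffices there; gcd 0 0 = 0 is
exactly what property 2 forces in the remaining case.

 divides :: Integer -> Integer -> Bool
 divides a b = if a == 0 then b == 0 else b `mod` a == 0

 -- g is non-negative, divides a and b, and is divided by every
 -- positive common divisor of a and b.
 isGcd :: Integer -> Integer -> Integer -> Bool
 isGcd g a b =
   g >= 0 && divides g a && divides g b &&
   and [ divides g' g
       | g' <- [1 .. max (abs a) (abs b)]
       , divides g' a, divides g' b ]

 -- e.g.  and [ isGcd (gcd a b) a b | a <- [-6..6], b <- [-6..6], (a,b) /= (0,0) ]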

It all depends on who you want to convince that gcd 0 0 should be 0,
the mathematicians (by elegance) or the programmers (by concreteness).
But since it seems that Simon is just going to put 'it is so' in the
report, the point is moot.

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: gcd 0 0 = 0

2001-12-17 Thread Lars Henrik Mathiesen

 From: Marc van Dongen [EMAIL PROTECTED]
 Date: Sun, 16 Dec 2001 13:35:59 +
 
 Marc van Dongen ([EMAIL PROTECTED]) wrote:
 
 :   An integer $a$ divides integer $b$ if there exists an integer
 :   $c$ such that $a c = b$.
 
 To make clear why $0$ (and not any non-zero integer) is the
 gcd of $0$ and $0$ I should have added that for the integer case
 $g$ is called a greatest common divisor (gcd) of $a$ and $b$ if it
 satisfies each of the following two properties:
 
  1) $g$ divides both $a$ and $b$;
  2) if $g'$ is a common divisor of $a$ and $b$ then $g'$ divides $g$.

In case it isn't clear already, these definitions make a lattice on
the positive integers, with divides ~ leq, gcd ~ meet and lcm ~ join,
using the report's definitions of gcd and lcm.

(Choosing the positive result for gcd/lcm is equivalent to noting that
divides is a preorder, but not a partial order, on the non-zero
integers, and that the quotient identifies x and -x.)

The only thing lacking to make it a lattice on the non-negative
integers is that  gcd 0 0 = 0 . All other cases involving zero
(i.e.,  gcd 0 x = x  for non-zero x, and  lcm 0 x = 0  for all x) are
consistent with 0 being the top element of the lattice, i.e., with all
integers dividing zero.
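As a sanity check (my addition; gcd 0 0 is a runtime error in the
Report's Prelude as it stands, which is what this thread is about, so
the sketch patches in the proposed value), the lattice laws can be
tested on a small range of non-negative integers:

 gcd', lcm' :: Integer -> Integer -> Integer
 gcd' 0 0 = 0                  -- the case under discussion
 gcd' a b = gcd a b
 lcm' a b = lcm a b

 latticeOK :: Integer -> Integer -> Integer -> Bool
 latticeOK a b c =
      gcd' a (lcm' a b) == a                        -- absorption
   && lcm' a (gcd' a b) == a
   && gcd' a (gcd' b c) == gcd' (gcd' a b) c        -- associativity
   && lcm' a (lcm' b c) == lcm' (lcm' a b) c
   && gcd' a 0 == a && lcm' a 0 == 0                -- 0 is the top element

 -- e.g.  and [ latticeOK a b c | a <- [0..8], b <- [0..8], c <- [0..8] ]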

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: beginner's questions - fix f

2001-07-24 Thread Lars Henrik Mathiesen

 From: Bob Koutsky [EMAIL PROTECTED]
 Date: Tue, 24 Jul 2001 09:49:33 +0200
 
 [...] suddenly, I hit a wall:
 
 
 Exercise 9.9:
 remainder a b = if a < b then a
  else remainder (a-b) b
 
 fix f = f (fix f)
 
 Rewrite remainder using fix so that it is not recursive.
 
 
 Function fix left me completely puzzled. With help of hugs I found out that 
 its type is (a -> a) -> a, but I have absolutely no idea how it could 
 be used to do anything useful. It seems to me that the argument to f on the 
 right side of fix's definition must always evaluate to the same value, no 
 matter how deep the recursion is, so there is no point to use it. I guess I 
 am missing something very obvious, but what is it? Can somebody provide me 
 with an example how to use fix for something just a bit useful, if possible 
 to rewrite remainder?

Well, it is a kind of sleight of hand. The idea is that if you already
had a working remainder function, you could use that to write another,
like this:

 remainder2 a b = if a < b then a
  else remainder (a-b) b

That's not very interesting. But if you made the 'working' function
into an argument of a 'step' function:

 rem_step f a b = if a < b then a else f (a-b) b

you could actually define remainder like this:

 remainder3 = rem_step remainder3

Now this is still a recursive definition, but it makes it easier to
see how the fix function works in a minute. An example evaluation:

 remainder3 3 2 ==
 rem_step remainder3 3 2 ==
 if 3 < 2 then 3 else remainder3 (3-2) 2 ==
 remainder3 1 2 ==
 rem_step remainder3 1 2 ==
 if 1 < 2 then 1 else remainder3 (1-2) 2 ==
 1

Now, anything that's defined as x = f x is called a fixpoint of f.
It's possible to prove that there's only one (when f is a Haskell
function, at least) so we can talk of 'the' fixpoint.

Since we can rewrite all recursive definitions in this form, we could
get rid of all explicit recursion if we had a 'fixpoint operator' that
would solve the equation for us. That operator is usually called
'fix', and since we want fix f to be a solution to the equation, we
get the defining equation in your exercise:

 fix f = f (fix f)

This looks like a recursive definition again, but if you want a system
without explicit recursion you have to disallow this as a function. In
such a system, fix is a builtin operator, and the equation can be seen
as a rewrite rule that lets you reason about the evaluation of
expressions that contain it.

However, it turns out that Haskell is quite happy to use the
definition as it stands. Let's redo the example:

 fix f = f (fix f)
 remainder4 = fix rem_step

 remainder4 3 2 ==
 fix rem_step 3 2 ==
 rem_step (fix rem_step) 3 2 ==
 if 3 < 2 then 3 else (fix rem_step) (3-2) 2 ==
 fix rem_step 1 2 ==
 rem_step (fix rem_step) 1 2 ==
 if 1 < 2 then 1 else (fix rem_step) (1-2) 2 ==
 1

As you noted, each occurrence of fix rem_step above is in a sense the
same, and could evaluate in the same way. But since the expression is
a function (in this case), how it actually behaves depends on its
arguments --- I think that's really the point you were looking for:
because Haskell is lazy, and there's no structure sharing between the
occurrences, each occurrence is only expanded as often as necessary.

Note that operationally, this Haskell version of the fixpoint operator
is only sort-of equivalent to a Haskell recursive definition (which is
defined in terms of a 'real' fixpoint). That's even clearer with data
structures; compare

 zeroes = 0 : zeroes

and

 zer_step l = 0 : l
 zeroes2 = fix zer_step

zeroes is a circular list with one element, while zeroes2 is an
infinite list with all zero elements. It may not be easy to tell the
difference from inside a Haskell program, but the sound of the disk
thrashing will sometimes alert the programmer to it.
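As an aside (jumping ahead to a point Marcin makes in a follow-up to
this post, below), a fixpoint operator that ties the knot the same way
the built-in recursion does restores the sharing:

 fixShared :: (a -> a) -> a
 fixShared f = let x = f x in x

 zeroes3 = fixShared zer_step

In typical implementations zeroes3 is again a circular one-element
structure like zeroes, rather than an ever-growing list like zeroes2.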

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: Haskell 98 Report possible errors, part one

2001-07-24 Thread Lars Henrik Mathiesen

 From: Dylan Thurston [EMAIL PROTECTED]
 Date: Mon, 23 Jul 2001 19:57:54 -0400
 
 On Mon, Jul 23, 2001 at 06:30:30AM -0700, Simon Peyton-Jones wrote:
  Someone else, quoted by Simon, attribution elided by Dylan, wrote:
  | 2.2. Identifiers can use small and large Unicode letters. 
  | What about caseless scripts where letters are neither small 
  | nor large? The description of module Char says: For the 
  | purposes of Haskell, any alphabetic character which is not 
  | lower case is treated as upper case (Unicode actually has 
  | three cases: upper, lower and title). This suggests that the 
  | only anomaly is that titlecase letters are considered 
  | uppercase. But what is actually specified is that caseless 
  | scripts can be used to write constructor names, but not 
  | variable names. I don't know how to solve this.
  
  I am woefully ignorant of Unicode, and I have no idea what to do
  about this one.  I therefore propose to do nothing on the grounds
  that I might easily make matters worse.
 
 In this case, what about requiring identifiers to start with an upper
 or lower case alphabetic character?

I'm not sure that makes things better. It just makes it impossible to
have identifiers in caseless scripts (some of which are alphabetic).

And whether you choose your upper or lower case alphabetic character
from Latin, Greek, Coptic, Cyrillic, Armenian, Georgian, or Deseret,
it will probably look silly in front of a variable name spelled in
Hangul.

What would make sense to me is to define that caseless letters
(Unicode class Lo) behave as lowercase, and to choose some easily
visible, culturally neutral, symbol as the official 'conid marker'.
Since the problem only arises on Unicode-capable systems, there should
be plenty of those to choose from, even outside Latin-1.
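Just to make that classification concrete, a rough sketch using the
Data.Char of today's hierarchical libraries (which did not exist in
this form when this was written); this is the proposal, not what the
Report specifies:

 import Data.Char (GeneralCategory(..), generalCategory)

 -- caseless letters (Lo) counted as lowercase; Lt and Lu as uppercase
 startsVarid, startsConid :: Char -> Bool
 startsVarid c = generalCategory c `elem` [LowercaseLetter, OtherLetter]
 startsConid c = generalCategory c `elem` [UppercaseLetter, TitlecaseLetter]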

To fix Haskell 98, the least intrusive way might be to allow only
classes Ll, Lt, and Lu in identifiers, with Lt (titlecase) and Lu
counting as uppercase --- it looks like that may actually have been
the intention. And then add a note explaining that caseless scripts
can't be used because they weren't considered initially.

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: beginner's questions - fix f

2001-07-24 Thread Lars Henrik Mathiesen

 From: Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED]
 Date: 24 Jul 2001 13:05:25 GMT
 
  24 Jul 2001 12:04:33 -, Lars Henrik Mathiesen [EMAIL PROTECTED] writes:
 
  Now, anything that's defined as x = f x is called a fixpoint of f.
  It's possible to prove that there's only one (when f is a Haskell
  function, at least) so we can talk of 'the' fixpoint.
 
 Not necessarily only one, e.g. any value is a fixpoint of the
 identity function.
 
 But there is one *least* fixpoint wrt. definedness, and it can be
 effectively found.

My error. I meant to write only one relevant fixpoint. This was a
beginner's question, after all, so I wasn't going to go into too many
details.

 BTW, a better definition than
 fix f = f (fix f)
 is
 fix f = let x = f x in x
 because it increases sharing, avoiding recomputation.

And it very obviously constructs a solution to x = f x and delivers
it. But trying to explain its operational behaviour gets right back to
Haskell's builtin recursion, which wasn't what the original poster was
trying to understand.

 You can define fix without recursion in a typeless lambda calculus:
 
 fix f = (\x -> f (x x)) (\x -> f (x x))
 
 and this can even be stretched to fit Haskell's type system by smart
 use of algebraic types.

I know. I even posted about it to another thread about two weeks ago.
But that loses sharing again, of course.

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: Negatively recursive data types

2001-07-04 Thread Lars Henrik Mathiesen

 From: Keith Wansbrough [EMAIL PROTECTED]
 Date: Wed, 04 Jul 2001 18:27:05 +0100
 
 Hi... I'm currently looking at the semantics of recursive data types.  
 One thing that Haskell allows, but the semantics for it is very hairy, 
 is *negatively* recursive data types.  That is, data types where the 
 recursion occurs to the left of a function arrow.  For example:
 
 This example is not a very good one.  Does anyone have an example of a 
 useful data type involving negative recursion?

There's a trick you can use to port the fixpoint operator from untyped
lambda calculus:

Untyped:
 fix f = (\x -> f (x x))  (\x -> f (x x))

This doesn't work in Hindley-Milner type systems, of course, since the
lambda-bound x's need to have two different types at the same time. So
we have to help the type checker, like this:

Haskell:
 data Fix a = F (Fix a -> a)
 
 fix :: (a -> a) -> a
 fix f = (\(F x) -> f (x (F x))) (F (\(F x) -> f (x (F x))))

At least it worked in Miranda(TM). But I don't know if it counts as
useful.
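For what it is worth, a small usage sketch (mine): with the Fix trick
above one can write, say, a factorial with no explicit recursion in any
value definition; the recursion has been pushed into the data type,
which is exactly the negative occurrence under discussion.

 fact :: Integer -> Integer
 fact = fix (\f n -> if n <= 1 then 1 else n * f (n - 1))

 -- fact 5 == 120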

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: newbie

2001-03-12 Thread Lars Henrik Mathiesen

 Date: Mon, 12 Mar 2001 17:16:29 +0100
 From: Frank Atanassow [EMAIL PROTECTED]

 Lars Henrik Mathiesen wrote (on 10-03-01 20:35 -):
  However, in some expositions of category theory, the usefulness of
  monads is justified because they 'belong' to a certain adjunction.
 
 You can regard a monad as arising from a particular adjunction but,
 although every adjunction determines a unique monad, the converse is
 not true. In fact, the collection of resolutions for a monad forms a
 category with adjunctions as objects and certain functors as arrows.
 The adjunction which gives rise to the Kleisli category is initial
 in this category. The terminal object is called the Eilenberg-Moore
 category and it has as objects M-algebras, like your `xi', and as
 arrows M-algebra homomorphisms.

Yes, I was aware of that --- I should perhaps have said that there's
typically a 'motivating' adjunction, often one involving a forgetful
functor. Which is generally different from the one into the Kleisli
category.

I read the rest of your post with great interest too, though I need to
work at it a bit before I think I understand all of it. MacLane is off
the shelf, and section IV.7 is scheduled to be worked through come the
weekend.

My own thoughts were a bit less ambitious, and I found out that
Haskell (at least hugs -98 +o) will in fact do what I had in mind:

 module Algebra () where

 class Monad m => Algebra m a where xi :: m a -> a

 instance (Num a) => Algebra [] a where
  xi = foldl (+) 0

 instance Algebra [] [a] where xi = concat

 unit :: Algebra [] a => a
 unit = xi []

 (#) :: Algebra [] a => a -> a -> a
 x # y = xi [x, y]

   Prelude> :load Algebra.lhs
   Reading file "Algebra.lhs":

   Hugs session for:
   /usr/local/share/hugs/lib/Prelude.hs
   Algebra.lhs
   Algebra> unit :: Int
   0
   Algebra> unit :: Float
   0.0
   Algebra> unit :: [Char]
   ""
   Algebra> "foo" # "bar"
   "foobar"
   Algebra> (1::Int) # 2 # 3
   6

But perhaps I'm just easily amused.
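For the record (my addition, not in the original mail): the M-algebras
mentioned above are usually required to satisfy xi . return = id and
xi . join = xi . fmap xi, which for the list monad read:

 algebraUnitLaw :: (Algebra [] a, Eq a) => a -> Bool
 algebraUnitLaw x = xi [x] == x

 algebraMultLaw :: (Algebra [] a, Eq a) => [[a]] -> Bool
 algebraMultLaw xss = xi (concat xss) == xi (map xi xss)

Both instances above satisfy them: foldl (+) 0 because (+) is
associative with unit 0, and concat by associativity of concatenation.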

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: Knight Problem

2001-02-01 Thread Lars Henrik Mathiesen

 From: "Christoph M." [EMAIL PROTECTED]
 Date: Fri, 2 Feb 2001 00:15:53 +0100

 Does anybody know how to solve the "Knight Problem" ?

If you mean the Knight's Tour problem, the answer is yes.

I coded up a version as an independent study back in grade 12 (1977)
--- in COMAL on a Norsk Data machine. Even when displaying character
graphics to show each partial solution, it only took a few hours to
get to the first solution.

So the Haskell program for your assignment should probably run in less
than 100 milliseconds to match that.
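For completeness, a minimal backtracking sketch (mine, and deliberately
as naive as that 1977 program; a serious attempt would add something
like Warnsdorff's rule):

 type Square = (Int, Int)

 moves :: Square -> [Square]
 moves (x, y) = [ (x', y')
                | (dx, dy) <- [ (1,2),(2,1),(2,-1),(1,-2)
                              , (-1,-2),(-2,-1),(-2,1),(-1,2) ]
                , let x' = x + dx, let y' = y + dy
                , x' >= 1, x' <= 8, y' >= 1, y' <= 8 ]

 -- all complete tours extending the given path (most recent square first)
 tours :: [Square] -> [[Square]]
 tours []              = []
 tours path@(sq:_)
   | length path == 64 = [reverse path]
   | otherwise         = [ t | next <- moves sq
                             , next `notElem` path
                             , t <- tours (next:path) ]

 -- e.g.  head (tours [(1,1)])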

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)




Re: combinator parsers and XSLT

2000-09-28 Thread Lars Henrik Mathiesen

 From: "Manuel M. T. Chakravarty" [EMAIL PROTECTED]
 Date: Fri, 29 Sep 2000 10:17:56 +1100

 I agree that usually the predicates as proposed by you would
 be better.  The problem is that a scanner that wants to use
 the usual deterministic finite automaton techniques for
 scanning needs to be able to compute the overlap between
 different predicates.

If the predicates are functions, computing the overlap as another
function is easy.

Isn't the point really that you need to know when the overlap is
empty, so you don't get a combinatorial explosion of cases?
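To spell out what I mean, the overlap itself is just pointwise
conjunction:

 overlap :: (Char -> Bool) -> (Char -> Bool) -> Char -> Bool
 overlap p q c = p c && q c

Deciding whether that overlap is empty is what a DFA-building scanner
generator really needs, and with opaque functions the only option is to
test every code point, whereas explicit character sets can be
intersected symbolically.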

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)





Re: Overlapping instances?

1999-06-14 Thread Lars Henrik Mathiesen

 Date: Sun, 13 Jun 1999 16:46:57 -0400
 From: Kevin Atkinson [EMAIL PROTECTED]

 Thanks but why is this OK?

Sorry, I misunderstood the question.

   class T f r
 
   instance T a   (a)
   instance T (c a b) (c a (b))

 I mean the common instance here is T (c a b) (c a (b)).

Well, in a sense these instance declarations aren't overlapping ---
one is a proper subset of the other. But I just realized that I've
never understood how instance declarations without type constants are
supposed to work anyway.

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)





Re: Overlapping instances?

1999-06-13 Thread Lars Henrik Mathiesen

 Date: Sun, 13 Jun 1999 01:51:06 -0400
 From: Kevin Atkinson [EMAIL PROTECTED]

 Could someone explain to me why [this is not OK]:

   class T f r
 
   instance T a   (d a)
   instance T (c a b) (c a (d b))

Because, just as Hugs says:

   *** Common instance : T (a b c) (a b (a b c))

given

data Foo a b = Foo a b

the type

T (Foo Int Char) (Foo Int (Foo Int Char))

will match both instance declarations --- the first one with

a = (Foo Int Char)
d = (Foo Int)

the second one with

c = Foo
a = Int
b = Char
d = (Foo Int)

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)





Re: how to write a simple cat

1999-06-01 Thread Lars Henrik Mathiesen

 Date: Tue, 01 Jun 1999 17:32:22 +0200
 From: Sven Panne [EMAIL PROTECTED]

 Don't fear! Mr. One-Liner comes to the rescue: ;-)
 
longerThan fn lenlim = readFile fn >>= lines .| filter (length .| (> lenlim)) .|
zip [1..] .| map (\(n,l) -> shows n ") " ++ l) .| unlines .| putStr

Are you sure he didn't want the _original_ line numbers?

   longerThan fn lenlim = readFile fn >>= lines .| zip [1..] .| filter (snd .| length
.| (> lenlim)) .| map (\(n,l) -> shows n ") " ++ l) .| unlines .| putStr
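
The .| above is not a Prelude operator; the original presumably defined
it elsewhere. A minimal definition that makes both one-liners typecheck
(my assumption) is left-to-right composition:

 infixl 9 .|
 (.|) :: (a -> b) -> (b -> c) -> a -> c
 f .| g = g . f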

Lars Mathiesen (U of Copenhagen CS Dep) [EMAIL PROTECTED] (Humour NOT marked)