[Haskell-cafe] FAQ: Why no interactive definitions?

2005-12-22 Thread Greg Woodhouse
In neither GHCi nor Hugs (so far as I know) is it possible to
interactively enter definitions. Coming from Scheme, this was a bit of
a surprise, as I'm used to being able to enter, say

(define mysquare
  (lambda (x)
    (* x x)))

Is this just a matter of the feature not being implemented, or is there
a deeper reason for this?
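
[For what it's worth, GHCi (though not Hugs) does accept definitions at
the prompt when they are phrased as let bindings -- a small
illustrative session, with prompt text as an assumption:

    Prelude> let mysquare x = x * x
    Prelude> mysquare 4
    16

The binding is then in scope for the rest of the session.]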

===
Gregory Woodhouse  [EMAIL PROTECTED]
Interaction is the mind-body problem of computing.
--Philip L. Wadler
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Fwd: Monads and pedagogy (Functions with side-effects?)

2005-12-21 Thread Greg Woodhouse

Oops...meant to send this to the list, too.

--- Greg Woodhouse [EMAIL PROTECTED] wrote:

 Date: Wed, 21 Dec 2005 09:42:44 -0800 (PST)
 From: Greg Woodhouse [EMAIL PROTECTED]
 Subject: Monads and pedagogy (Functions with side-effects?)
 To: David Barton [EMAIL PROTECTED]
 
 --- David Barton [EMAIL PROTECTED] wrote:
 
  Wolfgang Jeltsch writes:
  - Original Message - 
  
   On Wednesday, 21 December 2005 at 13:15, Creighton Hogg wrote:
   [...]
  
   Monads, I believe, can be just thought of as containers for
 state.
  
   I would say that you are talking especially about the I/O monad
   here.  A monad as such is a rather general concept like a group is
   in algebra.
  
  While this is correct, I'm afraid that for most of us it is a
  flavorless answer.  I wish I had the mathematical mind that made the
  word "group" in this context instantly intuitively recognizable, but
  I don't.
 
 Groups are actually relatively intuitive structures, representing (no
 pun intended) the symmetries of some type of object. So, in teaching
 group theory it is easy to immediately provide a number of easy to
 grasp examples: the symmetric group (any kind of rearrangement), the
 cyclic group (cyclic permutation), the dihedral group (geometric
 symmetries of a square), SL(n, R) (volume-preserving linear
 transformations), SO(n, R) (transformations preserving lengths and
 angles), Isom(M) (all geometric motions of an object so that in the
 end it occupies the same space without any stretching or other
 deformation), etc., etc.
 
 Rings are a little more tricky, but R[x] (polynomials over R) is the
 universal example, with homomorphic images like Z[i] just being
 polynomials in i where you reduce according to i^2 = -1, and so on.
 Matrices over a ring give you the prototypical example of a module. I
 could go on, but the point is that in abstract algebra it is usually
 easy to provide simple intuitive examples of the structures
 illustrating most of the basic properties of the category. (Okay,
 okay, topoi and coherent sheaves may be an exception! But even then,
 analytic functions go a long way.)
 
 With monads, it is not hard to give a formal definition, just as is
 the case with groups, but much harder (for me, anyway) to find a
 simple intuitive set of examples that capture the essence of monads
 in such a way that the definition is obvious. Think about it: If
 groups are just symmetries, then doing nothing is clearly a symmetry
 (1), and doing nothing before or after a symmetry doesn't change
 anything. Undoing a symmetry gives you an inverse. Associativity is
 less obvious, unless you think of groups realized as permutations of
 a set, so that the group product just becomes function composition.
 
 
 
 ===
 Gregory Woodhouse  [EMAIL PROTECTED]
 Interaction is the mind-body problem of computing.
 --Philip L. Wadler
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Functions with side-effects?

2005-12-21 Thread Greg Woodhouse
--- Wolfgang Jeltsch [EMAIL PROTECTED] wrote:

 
  I think Phil Wadler said it best when he said that a monad is a
  *computation*.

To be honest, I'm still struggling with the monad concept myself. Oh
sure, I can read the definition and it makes sense. But I'm still
missing that aha! moment when it all comes together.

 
 No, a value of type IO something is a computation.  The type IO
 plus (>>=) plus return is a monad.  So is [] plus concatMap plus
 \x -> [x].


This may be totally off-base, but when I read this, it occurred to me
that all I/O is basically a computation made to appear as if it is
something your program does.  You (or, rather, the processor) don't
execute instructions to write "Hello" in the same way as, say, adding
2 and 2. Rather, you add writing this string to a to-do list and wait
for a driver to respond to an interrupt, pick up the request(s), and
carry it (them) out when control passes back to the kernel.
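
[A minimal sketch of that "to-do list" reading -- the names here are
mine, purely illustrative: an IO value only describes work, and the
runtime performs it later.  The list monad from the quoted message is
shown alongside.

    -- Building IO actions performs nothing; the runtime runs them later.
    todo :: [IO ()]
    todo = [putStrLn "Hello", putStrLn "world"]

    main :: IO ()
    main = sequence_ todo   -- hand the descriptions over, in order

    -- The quoted list monad: return is \x -> [x], (>>=) is flip concatMap.
    pairs :: [(Int, Int)]
    pairs = [1, 2] >>= \x -> [3, 4] >>= \y -> return (x, y)
    -- pairs == [(1,3),(1,4),(2,3),(2,4)]
]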
 


===
Gregory Woodhouse  [EMAIL PROTECTED]
Interaction is the mind-body problem of computing.
--Philip L. Wadler
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Re[2]: [Haskell-cafe] Monads in Scala, XSLT, Unix shell pipes was Re: Monads in ...

2005-11-27 Thread Greg Woodhouse
--- Bulat Ziganshin [EMAIL PROTECTED] wrote:

 Hello Greg,
 
 for pure functional computations concurrency is just one of
 IMPLEMENTATION mechanisms, and it doesn't appear in abstractions
 DEFINITIONS 
 

I suppose it depends a bit on the question you're asking. A
multiprocessor, considered as a whole, might be a platform upon which
you wish to implement a functional language. And in a certain sense,
what you do with those processors is an implementation issue. But what
I'm after is compositionality. I have in mind message based physically
distributed systems, where individual components can be thought of as
having well-defined semantics from which the semantics of the system as
a whole can be defined. It's not at all clear (to me, anyway) how to do
this. In a physically distributed system, it seems natural to think of
the other processors, together with the bus(es) or network interfaces
as providing part of the environment, and this leads naturally to the
idea of using a theoretical tool like monads or continuations to model
one of these components -- but that doesn't (obviously, at least)
lead to compositional semantics because of the obvious asymmetry.

By way of background, a project I had been working on (until the
project was cancelled) was something I dubbed an "interface compiler".
I had developed a number of HL7 interfaces in a traditional imperative
language (HL7, or Health Level 7, is an application protocol used in
healthcare). These interfaces were virtually identical in most
respects, so I set out to build a generic engine that would abstract
away from the details of each interface. I was successful and easily
re-implemented the interfaces I had already written using the new
engine. But a little reflection led me to conclude that this
template-driven approach was really higher order programming in disguise
(another factor leading to my renewed interest in functional
programming). Okay, that's fine as far as it goes, but it suffers from
a severe limitation: the computational model is a single network node
communicating with its environment. There is no obvious way (in
functional terms, at least) to go from the semantics of the subsystems
running on each node to the semantics of the system as a whole. An idea
that I've considered, but not really attempted to elaborate, is to
generate code for the whole system *as a unit*. In retrospect, I see
that this is essentially an attempt to move to the setting you
describe, in which concurrency is simply a design issue.
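
[A toy illustration of "templates as higher order programming" --
entirely my sketch; the names and types are hypothetical, not from the
actual engine: the generic engine is just a function parameterised by
the per-interface behaviour.

    type Message = String

    -- The "template" is the pair of plugged-in behaviours; the engine
    -- itself is interface-agnostic.
    engine :: (Message -> Bool)       -- which messages to accept
           -> (Message -> Message)    -- how to transform them
           -> [Message] -> [Message]
    engine accept transform = map transform . filter accept
]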

I have not yet read Misra's monograph (I hope I got that right -- I'm
visiting family and away from my library), but I'm attracted to the
idea that concurrency should not be a design issue and, by
extension(?), that the process is not fundamental. (After all, is it
not an artifact of the operating system?) This strikes a chord with me,
because computation in functional languages is a matter of reduction,
not sequential execution of statements (commands, really). I've been
attracted to reactive systems because they, too, seem to provide a path
to moving beyond the process abstraction, and because I've been working
on TCP/IP based applications for years, and find it all quite
fascinating. But, in a fundamental sense, reactive systems seem to
represent a step in the *opposite* direction. After all, the
appropriate program logic here seems to be temporal logic -- hardly
natural from a functional perspective!

I should apologize (no longer in advance) for the stream of
consciousness nature of this post. Think of it as an attempt to pull
together a few seemingly (or maybe not so seemingly) unrelated threads
from my earlier posts.


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Monads in Scala, XSLT, Unix shell pipes was Re: Monads in ...

2005-11-26 Thread Greg Woodhouse
--- Shae Matijs Erisson [EMAIL PROTECTED] wrote:

 Geoffrey Alan Washburn [EMAIL PROTECTED] writes:
 
 There's also Oleg's
 http://okmij.org/ftp/Computation/monadic-shell.html
 at the level of UNIX programming, all i/o can be regarded monadic.

Interesting. I had been thinking about I/O and operating systems as a
different(?) framework for trying to explain the concept of monads: If
you think about a write statement in an imperative language, it doesn't
actually perform the requested operation, it just asks the OS to
perform the action later. With respect to pipes, remember that files
are really fictions, too, and when you write to a pipe the data is
stored somewhere (most likely in memory) and then read in lazy fashion.
Maybe this is a different topic, but exploring concurrency in Haskell
is definitely on my to-do list, though it is really a bit of a puzzle.
One thing I've been thinking lately is that in functional programming
the process is really the wrong abstraction (computation is reduction,
not a sequence of steps performed in temporal order). But what is
concurrency if there are no processes to run concurrently? I've been
thinking about action systems and non-determinism, but am unsure how
the pieces really fit together.



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


RE: [Haskell-cafe] using of data types as kinds

2005-11-22 Thread Greg Woodhouse


--- Simon Peyton-Jones [EMAIL PROTECTED] wrote:

 Read about Omega!  
 

I will.

 Busy doing GADTs and impredicativity at the moment though
 

Impredicativity?


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records

2005-11-22 Thread Greg Woodhouse


--- Sven Panne [EMAIL PROTECTED] wrote:

 I think this discussion has reached a point where it is of utmost
 importance 
 to re-read Wadler's Law of Language Design, a law so fundamental to
 
 computer science that it can only be compared to quantum dynamics in
 physics:
 
http://www.informatik.uni-kiel.de/~mh/curry/listarchive/0017.html
 
 :-)
 
 Cheers,
S.

To be honest, I haven't followed the entire records thread (at least
not yet), but I don't know that it's fair to say that we've been
focusing entirely (or nearly so) on lexical issues. I'll grant you that
there's an awful lot of that going on, but unless I'm missing something
obvious, support for a record data type isn't even a purely syntactic
issue. If records are to be supported, they need to have semantics, and
it's not obvious to me how this is to be done in a functional language.

That being said, this is a matter of some interest to me, primarily
because I've been thinking about how to go about using Haskell with
(not necessarily relational) databases, and it seems awkward to use a
tuple or heterogenous list in a context where new attributes can be
added to existing data. Now, of course, that's a puzzle in its own
right: How on earth can you achieve anything like referential
transparency here? 
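
[One low-tech reading of "record with attributes added later" -- my
illustration only, not a proposal from the thread: a persistent map,
where adding an attribute builds a new value and leaves the old one
intact, which is one way referential transparency survives.

    import qualified Data.Map as Map

    data Value = VInt Int | VText String deriving (Eq, Show)
    type Row = Map.Map String Value

    -- Adding an attribute yields a *new* Row; callers holding the old
    -- Row still see exactly the attributes they had before.
    addAttribute :: String -> Value -> Row -> Row
    addAttribute = Map.insert
]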


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Greg Woodhouse


--- Paul Hudak [EMAIL PROTECTED] wrote:

 This is a very late response to an old thread...
 
 Tom Hawkins wrote:
   In a pure language, is it possible to detect cycles in recursive
   data structures?  For example, is it possible to determine that
   cyclic has a loop? ...
  
   data Expr = Constant Int | Addition Expr Expr
  
   cyclic :: Expr
   cyclic = Addition (Constant 1) cyclic
  
   Or phrased differently, is it possible to make Expr an instance of
   Eq such that cyclic == cyclic is smart enough to avoid a recursive
   descent?
  

I'm a little confused. It's one thing to demonstrate that a (possibly
infinite) digraph contains a loop, but quite another to show that it
does not. Carrying around a copy of the subgraph consisting of nodes
actually visited is workable in a pure language, but there's no
guarantee of termination.
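
[A small sketch of that caveat, using Tom's Expr type; the
bounded-depth comparison is my illustration. Without node identity, a
cycle cannot be distinguished from a sufficiently deep finite tree, so
the best a pure traversal can do is give up at some depth.

    data Expr = Constant Int | Addition Expr Expr

    cyclic :: Expr
    cyclic = Addition (Constant 1) cyclic

    -- Compare two Exprs down to a fixed depth only; beyond that, answer
    -- True.  An approximation to (==), but it always terminates.
    eqToDepth :: Int -> Expr -> Expr -> Bool
    eqToDepth 0 _ _ = True
    eqToDepth _ (Constant m)   (Constant n)   = m == n
    eqToDepth d (Addition a b) (Addition c e) =
        eqToDepth (d - 1) a c && eqToDepth (d - 1) b e
    eqToDepth _ _ _ = False

    -- eqToDepth 100 cyclic cyclic == True, and it terminates.
]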

   -Tom
 
 ---
 
 Perhaps it's obvious, but one way to do this is to make the cycle
 *implicit* via some kind of a fixed-point operator, which is a trick
 I used recently in a DSL application.


So, you imagine f to be a function that (non-deterministically) follows
an edge in a digraph. The idea being that G is a (possibly infinite)
digraph and F is a subgraph. The function f would
non-deterministically select a vertex of F, follow an edge in G (again
chosen non-deterministically), add the (possibly new) edge to F, and
return the resulting graph. Then, you are asking if f has a fixpoint in
this broader context?

 For example, you could define:
 
   data Expr = Const Int
 | Add Expr Expr
 | Loop  -- not exported
deriving (Eq, Show)
 
 The purpose of Loop is to mark the point where a cycle exists.
 Instead of working with values of type Expr, you work with values of
 type Expr -> Expr, or Fix Expr:
 
   type Fix a = a -> a
 
 For example:
 
   fe1,fe2 :: Fix Expr
   fe1 e = Add (Const 1) (Const 1)  -- non-recursive
   fe2 e = Add (Const 1) e  -- recursive
 
 You can always get the value of an Expr from a Fix Expr whenever you
 need it, by computing the fixed point:
 
   fix f = x where x = f x

If it can be computed. Maybe I'm off-track here, but it seems to me
that when you introduce laziness into the equation, you lose any
guarantee of there even being a fixpoint, much less one that can be
computed.
 
   e1,e2 :: Expr
   e1 = fix fe1  -- finite
   e2 = fix fe2  -- infinite
 
 Note that e2 is equivalent to your cyclic.  But now we can also
 test for equality:
 
   instance Eq (Fix Expr) where
 fe1 == fe2  =  fe1 Loop == fe2 Loop
 
 For example, fe2 == fe2 returns True, whereas e2 == e2
 (i.e. your cyclic == cyclic) diverges.
 
 Of course, note that, although fe1 == fe2 implies that fix fe1 ==
 fix fe2, the reverse is not true, since there will always be loops
 of varying degrees of unwinding that are semantically equivalent,
 but not syntactically, or structurally, equivalent.  Thus this
 definition of equality is only approximate, but still, it's useful.

I'm lost. Are you saying bottom is bottom?
 
 If you want to have multiple loops, or a loop not at the top level,
 then you need to add some kind of a loop constructor to Expr.  I've
 sketched that idea below, and pointed out a couple of other useful
 ideas, such as showing a loop, and mapping a function over a loop
 without unwinding it.
 
 I hope this helps,
 
-Paul
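
[The quoted pieces, assembled into one runnable sketch -- my
transcription; it needs FlexibleInstances for the Eq instance on the
function type Fix Expr:

    {-# LANGUAGE FlexibleInstances #-}

    data Expr = Const Int | Add Expr Expr | Loop
        deriving (Eq, Show)

    type Fix a = a -> a

    fix :: Fix a -> a
    fix f = x where x = f x

    fe1, fe2 :: Fix Expr
    fe1 e = Add (Const 1) (Const 1)   -- non-recursive
    fe2 e = Add (Const 1) e           -- recursive

    -- Equality by plugging Loop into the hole:
    instance Eq (Fix Expr) where
        f == g = f Loop == g Loop

    -- fe2 == fe2 is True; fix fe2 == fix fe2 would diverge.
]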
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse
Maybe this is old hat, but the question about detecting loops in data
structures got me thinking about this. I know you can encode the cons
operator (and ordinary lists) in pure lambda calculus, but how could
you possibly represent something like [0, 1..]? One thought that
occurs to me is to treat it as a kind of direct limit, but of course,
the essence of a structure like this is that the ith entry is
computable in a natural way, and it seems that an algebraic limit
couldn't really capture this intuition unless we were to introduce some
sort of higher order structure (which may be what we want, anyway).


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 What do you mean by represent?
 It's easy enough to write down the lambda term that is the
 encoding of [0..].
 
   -- Lennart
 

You mean like \x -> x ? If I apply it to the Church numeral i, I get i
in return.  But that hardly seems satisfying, because infinite lists
seem to be a wholly different type of object.


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Greg Woodhouse


--- Paul Hudak [EMAIL PROTECTED] wrote:

 
 I suspect from your other post that you haven't seen the standard
 trick of encoding infinite data structures as fixpoints.  Suppose you
 have a lambda calculus term for cons, as well as for the numeral 1.
 Then the infinite list of ones is just:
 

Not in quite those terms. Intuitively, I think of the infinite data
structure here as being a kind of "completion" of the space of
ordinary (finite) graphs to a space in which the given function actually
does have a fixed point.

 Y (\ones. cons 1 ones)
 
 where Y (aka the paradoxical combinator or fixed point
 combinator)
 is defined as:
 
 \f. (\x. f (x x)) (\x. f (x x))
 

Now, this I have seen, but it frankly strikes me as a formal
trick. I've never felt that this definition of Y gave me much insight
into the computation you describe below.

 To see this, try unfolding the above term a few iterations using
 normal-order reduction.  Note that the term has no recursion in it 
 whatsoever.
 
 Now, my earlier point above is that:
 
 Y (\ones. cons 1 ones)
 
 unwinds to the same term as:
 
 Y (\ones. cons 1 (cons 1 ones))
 
 yet the two terms (\ones. cons 1 ones) and
 (\ones. cons 1 (cons 1 ones)) are not the same (in fact they're not 
 lambda convertible, either).
 
-Paul

And this is what leaves me scratching my head. If you leave off the
ones, then you get a sequence of finite lists [1], [1, 1], ... that
seems to approximate the infinite list, but I find it difficult to
imagine how you might pluck a formula like \i. 1 (the ith term) out of
the air.
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 How about:
 
 nil = \ n c . n
 cons x xs = \ n c . c x xs
 
 zero = \ z s . z
 suc n = \ z s . s n
 
 listFromZero = Y ( \ from n . cons n (from (suc n))) zero
 
 (Untested, so I might have some mistake.)
 
   -- Lennart
 

Okay, I see what you mean here. What I was thinking is that repeatedly
unfolding Y gives us kind of an algorithm to extract initial segments
of the list (like from), but there's something disquieting about all
of this. Maybe it's just my lack of familiarity with lambda calculus,
but I think one thing I got used to as a student was the idea that if
I started peeling back layers from an abstract definition or theorem,
I'd end up with something that looked familiar. Now, to me a list
comprehension is something like an iterative recipe. Maybe [0, 1..]
is something primitive that can only really be understood
recursively, and then a comprehension like

[ x*x | x <- [0, 1..] ]

is something easily built from that (or itself defined as a fixed
point), but it would be nice to see the iterative recipe somehow fall
out of the more formal definition in terms of Y.
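
[One way to see the iterative recipe fall out: in Haskell, let a fix
function play the role of Y -- a small sketch, my own transcription of
Lennart's encoding:

    fixp :: (a -> a) -> a
    fixp f = x where x = f x

    -- Y (\from n. cons n (from (suc n))), transcribed:
    from :: Integer -> [Integer]
    from = fixp (\f n -> n : f (n + 1))

    squares :: [Integer]
    squares = [ x * x | x <- from 0 ]

    -- take 5 squares == [0,1,4,9,16]
]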


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Detecting Cycles in Datastructures

2005-11-18 Thread Greg Woodhouse


--- Paul Hudak [EMAIL PROTECTED] wrote:

 
 The important property of Y is this:
 
 Y f = f (Y f)

Right. This is just a formal statement of the property that Y f is a
fixed point of f. I'm with you so far.
 
 In this way you can see it as unwinding the function, one step at a
 time.  If we define f as (\ones. cons 1 ones), then here is one step
 of unwinding:
 
 Y f == f (Y f) == cons 1 (Y f)
 

I'm with you here, too, except that we are *assuming* that Y has the
stated property. If it does, it's absolutely clear that the above
should hold.

 If you do this again you get:
 
 cons 1 (cons 1 (Y f))
 
 and so on.  As you can see, continuing this process yields the
 infinite 
 list of ones.

Right.

 
 Now note that if we define g as (\ones. cons 1 (cons 1 ones)), we
 get:
 
 Y g == g (Y g) == cons 1 (cons 1 (Y g))

Now, this is where things get a little mysterious. Where did g come
from?

I understand that the x x in the formal definition of Y is what makes
everything work (or, at least I think I do). When I try and think this
all through, I wind up with something like this:

First of all, I'm after a list (named ones) with the property that
ones = cons 1 ones. How can I express the existence of such a thing as
a fixed point? Well, if it does exist, it must be a fixed point of f =
\x. cons 1 x, because

f ones == (\x. cons 1 x) ones == cons 1 ones ==? ones

Now, if I write

Y = \y. (\x. y (x x)) (\x. y (x x))

then,

Y f == cons 1 ((\x. cons 1 (x x)) (\x. cons 1 (x x)))

and I'm left to ponder what that might mean. Formally, I can see that
this pulls an extra cons 1 out to the left, and so we're led to your g

 
 which obviously also yields the infinite list of ones.  Yet f is not 
 equal to g.

To me, all this really seems to be saying is that for any n, there's a
lambda expression explicitly involving [1, 1, ..., 1] (n times) that is
a solution to the above problem, which I interpret (perhaps
incorrectly) to mean that *if* there were a real list that were a
solution to the above recurrence, then for any n, there would be an
initial sublist consisting of just 1's.

Now, I know that lambda expressions are not lists, though lists can be
encoded as lambda expressions. For finite lists, this all seems simple
enough, but in this case we seem to have a lambda expression that is
not a (finite) list, but that can nevertheless be approximated in
some sense by finite lists.

But I can't really put my finger on just what that sense might be.
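
[One candidate for that sense, sketched in Haskell -- my transcription,
with fixp playing Y and (1 :) playing \ones. cons 1 ones:

    fixp :: (a -> a) -> a
    fixp f = f (fixp f)   -- the property Y f = f (Y f), taken as a definition

    ones :: [Int]
    ones = fixp (1 :)

    -- The finite approximations: n applications of (1 :) to bottom.
    approx :: Int -> [Int]
    approx n = iterate (1 :) undefined !! n

    -- For every n, take n (approx n) == take n ones == replicate n 1:
    -- each approximation agrees with ones as far as it is defined.
]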
 
 Does this help?

Yes, it does. Very much so.
 
-Paul
 
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Formalizing lazy lists?

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 
 Unfolding Y is indeed part of the algorithm to generate the list.
 The lambda calculus is just another programming language, so
 why does this disturb you?
 

Well... think about it this way. The function

f i = [1, 1..] !! i

is just a constant function expressed in a complicated way. Can I
algorithmically determine that f is a constant function? For any
particular value of i, the calculations you give demonstrate that f i =
1. It's a little less obvious that f really is constant. Yes, I know
this can be proved by induction, but verifying a proof is not the same
as discovering the theorem in the first place.

I suppose the big picture is whether I can embed the theory of finite
lists in the theory of infinite lists, preferably in such a way that
the isomorphism of the theory of finite lists with the embedded
subtheory is immediately obvious.


===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Infinite lists and lambda calculus

2005-11-18 Thread Greg Woodhouse


--- Lennart Augustsson [EMAIL PROTECTED] wrote:

 
 It computes the fixed point, which you can also define as
 
   fix f = lub { f^i(_|_) : i = 0, 1, 2, ... }
 
 where f^i is f iterated i times.  Is that a definition
 of fixpoint that makes you happier?

Believe it or not, yes. My background is in mathematics, so something
looking more like a fixed point of a continuous map seems a lot more
intuitive.
 
 
 How much domain theory have you studied?  Maybe that's where you can
 find the connection you're missing?
 

None. But it seems clear that this is a deficit in my education I need
to address.
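
[A concrete look at that chain, in Haskell -- my sketch, with f = (1 :)
as in the other thread. Each f^i(_|_) is a partial list, defined for i
cells and bottom afterwards, so the chain is ordered by definedness.

    chain :: Int -> [Int]
    chain i = iterate (1 :) undefined !! i

    -- Printing take k (chain i) succeeds exactly when k <= i; for
    -- k > i it hits the undefined tail.  The least upper bound of the
    -- chain -- the list defined everywhere -- is the infinite list of
    -- ones, fix (1 :).
]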

   -- Lennart
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-17 Thread Greg Woodhouse
Isn't there a potential for confusion with function composition (f . g)?

That being said, I like this idea (I just need to think it through a
bit).

Joel Reymont [EMAIL PROTECTED] wrote:

 I second this motion! I rather like Simon's proposal.

 On Nov 17, 2005, at 5:00 PM, Fraser Wilson wrote:

  Yeah, I thought you might have tried that at some point :-)

  I like http://research.microsoft.com/~simonpj/Haskell/records.html

  cheers, Fraser.






===
Gregory Woodhouse  [EMAIL PROTECTED]

Interaction is the mind-body problem of computing.

--Philip Wadler
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Records (was Re: [Haskell] Improvements to GHC)

2005-11-17 Thread Greg Woodhouse


--- Cale Gibbard [EMAIL PROTECTED] wrote:

 Actually, I didn't mention this in the other post, but why not the
 other way around? Make record selection (#) or (!) (though the latter
 gets in the way of array access), and leave (.) for function
 composition. 

Actually, the fact that (!) is the array selector makes it all the more
attractive as a record selector. (It does make you wonder if a record
isn't a kind of a typed associative array, though...)

 Personally, I'd like something which looked like an arrow
 for record selection, but most of the good 2-character ones are
 unavailable. (~>) is a bit hard to type and looks wrong in some
 fonts.

Well, yeah, but the arrows have such a fundamentally different meaning
in Haskell. (I thought of that one, too).

 There's a triangle which is not taken, and isn't so hard to type
 (|>).

If we're not careful, though, Haskell will end up looking like APL.
 
 I never really understood the attachment to (.) for record selection.
 There's no reason that we have to make things look like Java and C.
 
 Another option is to make application of a label to a record mean
 projection, somewhat like things currently are, though since labels
 aren't really functions anymore that is potentially confusing.
 

Actually, I thought of that, too, or rather something like

get label record

or

get record label

(I haven't made up my mind which way the currying makes more sense. Do
you have a generic function for getting fields with a certain label,
or do you apply get label, which gets the field with this label, to a
record?)
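
[A toy version of the two currying orders -- association-list "records"
are just my stand-in to make the question concrete:

    type Record = [(String, String)]

    getL :: String -> Record -> Maybe String   -- get label record
    getL = lookup

    getR :: Record -> String -> Maybe String   -- get record label
    getR = flip lookup

    -- getL "name" partially applied is "the name selector", reusable
    -- across records; getR r is "this record as a lookup function".
]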

  - Cale
 



===
Gregory Woodhouse  [EMAIL PROTECTED]


Interaction is the mind-body problem of computing.

--Philip Wadler

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe