Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Yitzchak Gale
I wrote:
 What goes wrong with finite coproducts? The obvious thing to
 do would be to take the disjoint union of the sets representing the
 types, identifying the copies of _|_.

Jonathan Cast wrote:
 This isn't a coproduct.  If we have f x = 1 and g y = 2, then there
 should exist a function h such that h . Left = f and h . Right = g...
 But by your rule, Left undefined = Right undefined...
 Which is a contradiction...

OK, thanks.

 Unless, of course,
 you require your functions to be strict --- then both f and g above
 become illegal, and repairing them removes the problem.

You don't have to make them illegal - just not part
of your notion of coproduct. That is an entirely
category-theoretic concept, since Empty is bi-universal,
and a morphism is strict iff the diagram

        f
   A -------> B
    \        ^
     \      /
      v    /
     Empty

commutes.

However, the coproduct you get is the one I suggested,
namely Either !A !B, not the one we usually use.
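This can be checked directly in code; the sketch below uses a made-up type SEither with strict constructor fields to stand in for Either !A !B, and evaluate/try only to witness _|_:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A coproduct candidate with strict constructor fields, as in
-- "Either !A !B" (the name SEither is invented for this sketch).
-- Forcing either constructor forces its payload, so SLeft undefined
-- and SRight undefined are both _|_ -- the two copies of bottom are
-- identified, unlike with the ordinary, lazy Either.
data SEither a b = SLeft !a | SRight !b

main :: IO ()
main = do
  -- ordinary Either: Left undefined is a constructor in WHNF
  putStrLn (case Left (undefined :: Int) :: Either Int Bool of
              Left _  -> "Left undefined: in WHNF"
              Right _ -> "Right")
  -- strict variant: SLeft undefined is _|_ itself
  r <- try (evaluate (SLeft (undefined :: Int) :: SEither Int Bool))
         :: IO (Either SomeException (SEither Int Bool))
  putStrLn (either (const "SLeft undefined = _|_") (const "defined") r)
```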

  What is the lifted version you are referring to?

 Take the ordinary disjoint union, and then add a new _|_ element,
 distinct from both existing copies of _|_ (which are still distinct
 from each other).

 
   Of course, Haskell makes things even worse by lifting the
  product and exponential objects,
 
  OK, what goes wrong there, and what is the lifting?

 Again, in Haskell, (_|_, _|_) /= _|_.  The difference is in the function

 f (x, y) = 1

 which gives f undefined = undefined but f (undefined, undefined) =
 1.  Unfortunately, this means that (alpha, beta) has an extra _|_
 element, distinct from (_|_, _|_), so it's not the categorical product (which
 would lack such an element for the same reason as above).
 This is partly an implementation issue --- compiling pattern matching
 without introducing such a lifting requires a parallel implementation
 --- and partly a semantic issue --- data introduces a single level of
 lifting, every time it is used, and every constructor is completely
 lazy.
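The pair example above can be run directly (a small sketch; evaluate/try are used only to observe _|_):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Pattern matching on the pair forces it to WHNF, so f is strict in
-- its argument even though it ignores both components.
f :: (Int, Int) -> Int
f (_, _) = 1

main :: IO ()
main = do
  -- the pair (undefined, undefined) is itself defined (a constructor):
  print (f (undefined, undefined))
  -- but _|_ at the pair type is not:
  r <- try (evaluate (f undefined)) :: IO (Either SomeException Int)
  putStrLn (either (const "f undefined = _|_") show r)
```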

 Functions have the same issue --- in the presence of seq,
 undefined /= const undefined.
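This is easy to observe, since const undefined is a partial application and hence already a function value in WHNF (a sketch):

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- const undefined is a partial application, already in WHNF,
  -- so seq does not diverge on it:
  putStrLn (seq (const undefined :: Int -> Int)
                "const undefined: in WHNF")
  -- undefined at a function type is _|_, and seq forces it:
  r <- try (evaluate (seq (undefined :: Int -> Int) "ok"))
         :: IO (Either SomeException String)
  putStrLn (either (const "undefined: _|_") id r)
```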

 Extensionality is a key part of the definition of all of these
 constructions.  The categorical rules are designed to require, in
 concrete categories, that the range of the two injections into a
 coproduct form a partition of the coproduct, the surjective pairing
 law (fst x, snd x) = x holds, and the eta reduction law (\ x -> f x)
 = f holds.  Haskell flouts all three; while some categories have few
 enough morphisms to get away with this (at least sometimes), Hask is
 not one of them.

 jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Derek Elkins
On Wed, 2008-01-02 at 15:49 +0200, Yitzchak Gale wrote:

[...]

 Some people are worried that this version of Hask is missing
 certain nice properties that one would like to have. For
 example, it was recently claimed on this list that tuples
 are not products in that category. (Or some such. I would be
 interested to see a demonstration of that.)

Jonathan has given such a demonstration (and it has been demonstrated
many times on this list since its creation; it's well-known).

 I am not impressed by those complaints. As usual in category
 theory, you define corresponding notions in Hask, and prove
 that they are preserved under the appropriate functors.
 That should always be easy. And if ever it is not, then you
 have discovered an interesting non-trivial consequence of
 laziness that deserves study.

You are right not to be impressed by such complaints, but you
misrepresent people's views on this by saying that they worry about
such problems.

As you say (people say), these properties [that Hask is cartesian closed
to start] would be nice to have and are very convenient to assume which
is often safe enough.  Certainly computer scientists of a categorical
bent have developed (weaker) notions to use; namely, monoidal,
pre-monoidal, Freyd, and/or kappa categories and no doubt others.  Using
these, however, removes some of the allure of using a categorical
approach.  Also, there is a Haskell-specific problem at the very get-go.
The most obvious choice for the categorical composition operator,
assuming the obvious choice for the arrows and objects, does not work:
it does not satisfy the laws of a category, assuming the = used in them
is observational equality.  Namely, id . f = f = f . id does not hold
for all functions f; in particular, it fails when f = undefined.  This can
easily be fixed by making the categorical (.) strict in both arguments
and there is no formal problem with it being different from Haskell's
(.), but it certainly is not intuitively appealing.
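A minimal sketch of that fix (the operator name (.!) is made up here; evaluate/try only witness _|_):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Composition made strict in both arguments, so that
-- id .! undefined = undefined = undefined .! id, restoring the
-- identity laws up to observational equality (including seq).
(.!) :: (b -> c) -> (a -> b) -> a -> c
f .! g = f `seq` g `seq` (f . g)

main :: IO ()
main = do
  -- Haskell's (.): id . undefined is a lambda in WHNF, so seq can
  -- tell it apart from undefined, and id . f = f fails at f = undefined.
  putStrLn (seq (id . (undefined :: Int -> Int))
                "id . undefined: in WHNF")
  -- the strict version identifies the two:
  r <- try (evaluate ((id .! (undefined :: Int -> Int)) `seq` "ok"))
         :: IO (Either SomeException String)
  putStrLn (either (const "id .! undefined = _|_") id r)
```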



Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Yitzchak Gale
(sorry, I hit the send button)

 What is the lifted version you are referring to?

 Take the ordinary disjoint union, and then add a new _|_ element,
 distinct from both existing copies of _|_ (which are still distinct
 from each other).

Now why is that not the category-theoretic coproduct?
h . Left = f and h . Right = g both for _|_ and for finite
elements of the types. And it looks universal to me.

  Of course, Haskell makes things even worse by lifting the
 product and exponential objects,

 OK, what goes wrong there, and what is the lifting?

 Again, in Haskell, (_|_, _|_) /= _|_.  The difference is in the function
 f (x, y) = 1 which gives f undefined = undefined but
 f (undefined, undefined) = 1.

I don't get that one. Given any f and g, we define h x = (f x, g x).
Why do we not have fst . h = f and snd . h = g, both in Hask
and StrictHask? fst and snd are strict.

  Unfortunately, this means that (alpha, beta) has an extra _|_
 element, distinct from (_|_, _|_), so it's not the categorical product (which
 would lack such an element for the same reason as above).

The reason you can't adjoin an extra element to (A,B) in, say,
Set, is that you would have nowhere to map it under fst
and snd. But here that is not a problem, _|_ goes to _|_
under both.

 This is partly an implementation issue --- compiling pattern matching
 without introducing such a lifting requires a parallel implementation

That's interesting. So given a future platform where parallelizing
is much cheaper than it is today, we could conceivably have
a totally lazy version of Haskell. I wonder what it would be like
to program in that environment, what new problems would arise
and what new techniques would solve them. Sounds like a nice
research topic. Who is working on it?

 --- and partly a semantic issue --- data introduces a single level of
 lifting, every time it is used, and every constructor is completely
 lazy.

Unless you use bangs. So both options are available, and that
essentially is what defines Haskell as being a non-strict language.

 Functions have the same issue --- in the presence of seq,
 undefined /= const undefined.

Right. I am becoming increasingly convinced that the seq issue
is a red herring.

 Extensionality is a key part of the definition of all of these
 constructions.  The categorical rules are designed to require, in
 concrete categories, that the range of the two injections into a
 coproduct form a partition of the coproduct, the surjective pairing
 law (fst x, snd x) = x holds, and the eta reduction law (\ x -> f x)
 = f holds.  Haskell flouts all three; while some categories have few
 enough morphisms to get away with this (at least sometimes), Hask is
 not one of them.

That interpretation is not something that is essential in the notion
of category, only in certain specific examples of categories
that you know. Product and coproduct in any given category -
whether they exist, what they are like if they exist, and what alternative
constructions exist if they do not - reflect the nature of the structure
that the category is modeling.

I am interested in understanding the category Hask that represents
what Haskell really is like as a programming language. Not
under some semantic mapping that includes non-computable
functions, and that forgets some of the interesting structure
that comes from laziness (though that is undoubtedly also
very interesting).

-Yitz


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Yitzchak Gale
I wrote:
 ...it was recently claimed on this list that tuples
 are not products in that category.

Derek Elkins wrote:
 Jonathan has given such a demonstration (and it has been demonstrated
 many times on this list since its creation; it's well-known).

We're still working on it. I've not been convinced yet.
Sorry about my thickness.

Perhaps this should be on a nice wiki page somewhere.
I tried to convince David House to put it in the Wikibook
chapter, but he is right, that needs to be simpler. There is
some discussion on the talk page for that
chapter, but no one spells out the details there, either.

 You are right not to be impressed by such complaints, but you
 misrepresent people's views on this by saying that they worry about
 such problems.

Sorry, I hope I am not misrepresenting anyone. I just
notice that people make assumptions about a so-called
category Hask, derive various conclusions, then mention
that they are not really true. Perhaps it is only me who is
worried.

 As you say (people say), these properties [that Hask is cartesian closed
 to start] would be nice to have and are very convenient to assume which
 is often safe enough.

I'd like to understand better what is true, so that I can understand
what is safe.

  Certainly computer scientists of a categorical
 bent have developed (weaker) notions to use; namely, monoidal,
 pre-monoidal, Freyd, and/or kappa categories and no doubt others.  Using
 these, however, removes some of the allure of using a categorical
 approach.

It would be nice to distill out of that the basics that are needed
to get the properties that we need for day-to-day work in
Haskell.

  Also, there is a Haskell-specific problem at the very get-go.
 The most obvious choice for the categorical composition operator
 assuming the obvious choice for the arrows and objects does not work...
 ...This can
 easily be fixed by making the categorical (.) strict in both arguments
 and there is no formal problem with it being different from Haskell's
 (.), but it certainly is not intuitively appealing.

I'm not sure it's so bad. First of all, not only is it not a formal
problem, it's also not really a practical problem - there will rarely
if ever be any difference between the two when applied in real
programs.

My opinion is that there would not be any problem with using
that version as (.) in the Prelude. Even if we never do, at least
we should then use (.!) when we state the so-called Monad laws.
It bothers me that Haskell's so-called monads really aren't.
That is bound to cause problems.

And it would be so easy to fix in most cases - just require monad
bind to be strict in its second parameter.
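One concrete instance of the law failure being alluded to, for the reader monad ((->) r), whose bind eta-expands its result so the right identity law m >>= return = m fails up to seq (a sketch; not necessarily the exact fix intended above):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- For the reader monad ((->) r), bind wraps its result in a lambda:
--   m >>= k = \r -> k (m r) r
-- so (undefined >>= return) is a lambda in WHNF, while undefined
-- itself is _|_.  Thus m >>= return /= m observationally at m = _|_.
m :: Int -> Int
m = undefined

main :: IO ()
main = do
  putStrLn (seq (m >>= return) "m >>= return: in WHNF")
  r <- try (evaluate (m `seq` "ok")) :: IO (Either SomeException String)
  putStrLn (either (const "m = _|_") id r)
```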

Thanks,
Yitz


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Miguel Mitrofanov

Take the ordinary disjoint union, and then add a new _|_ element,
distinct from both existing copies of _|_ (which are still distinct
from each other).


Now why is that not the category-theoretic coproduct?
h . Left = f and h . Right = g both for _|_ and for finite
elements of the types. And it looks universal to me.


Yeah, but there could be more functions from Either X Y to Z than  
pairs of functions from X to Z and from Y to Z.


For example, if z :: Z, then you have two functions h1 and h2 such  
that h1 . Left = const z and h1 . Right = const z and the same holds  
for h2. Namely,


h1 = const z

h2 = either (const z) (const z)

These functions are different: h1 (_|_) = z while h2 (_|_) = (_|_).
And if Either X Y were a category-theoretic coproduct, then the
function from Either X Y to Z would be UNIQUELY determined by its
restrictions to X and Y.
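This argument can be run directly (a small sketch of the h1/h2 example above; evaluate/try only witness _|_):

```haskell
import Control.Exception (SomeException, evaluate, try)

z :: Int
z = 42

h1, h2 :: Either Bool Char -> Int
h1 = const z                      -- never inspects its argument
h2 = either (const z) (const z)   -- pattern matches, hence strict

-- Both satisfy h . Left = const z and h . Right = const z, yet they
-- differ at _|_, so the mediating arrow is not unique.
main :: IO ()
main = do
  print (h1 undefined)            -- h1 _|_ = z
  r <- try (evaluate (h2 undefined)) :: IO (Either SomeException Int)
  putStrLn (either (const "h2 undefined = _|_") show r)
```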



Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Jonathan Cast

On 6 Jan 2008, at 3:55 AM, Yitzchak Gale wrote:


I wrote:

What goes wrong with finite coproducts? The obvious thing to
do would be to take the disjoint union of the sets representing the
types, identifying the copies of _|_.


Jonathan Cast wrote:

This isn't a coproduct.  If we have f x = 1 and g y = 2, then there
should exist a function h such that h . Left = f and h . Right = g...
But by your rule, Left undefined = Right undefined...
Which is a contradiction...


OK, thanks.


Unless, of course,
you require your functions to be strict --- then both f and g above
become illegal, and repairing them removes the problem.


You don't have to make them illegal - just not part
of your notion of coproduct.


We're talking past each other --- what is the distinction you are  
making?


jcc



Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-06 Thread Jonathan Cast

On 6 Jan 2008, at 5:32 AM, Yitzchak Gale wrote:


(sorry, I hit the send button)


What is the lifted version you are referring to?



Take the ordinary disjoint union, and then add a new _|_ element,
distinct from both existing copies of _|_ (which are still distinct
from each other).


Now why is that not the category-theoretic coproduct?
h . Left = f and h . Right = g both for _|_ and for finite
elements of the types. And it looks universal to me.


Not quite.  The only requirement for h _|_ is that it be <= f x for
all x and <= g y for all y.  If f x = 1 = g y for all x, y, then both
h _|_ = _|_ and h _|_ = 1 are arrows of the category.  So the  
universal property still fails.



 Of course, Haskell makes things even worse by lifting the
product and exponential objects,



OK, what goes wrong there, and what is the lifting?


Again, in Haskell, (_|_, _|_) /= _|_.  The difference is in the  
function

f (x, y) = 1 which gives f undefined = undefined but
f (undefined, undefined) = 1.


I don't get that one. Given any f and g, we define h x = (f x, g x).
Why do we not have fst . h = f and snd . h = g, both in Hask
and StrictHask? fst and snd are strict.


Again, if f and g are both strict, we have a choice for h _|_ ---  
either h _|_ = _|_ or h _|_ = (_|_, _|_) will work (fst . h = f and  
snd . h = g), but again these are different morphisms.





 Unfortunately, this means that (alpha, beta) has an extra _|_
element, distinct from (_|_, _|_), so it's not the categorical product (which
would lack such an element for the same reason as above).


The reason you can't adjoin an extra element to (A,B) in, say,
Set, is that you would have nowhere to map it under fst
and snd. But here that is not a problem, _|_ goes to _|_
under both.

This is partly an implementation issue --- compiling pattern  
matching
without introducing such a lifting requires a parallel  
implementation


That's interesting. So given a future platform where parallelizing
is much cheaper than it is today, we could conceivably have
a totally lazy version of Haskell. I wonder what it would be like
to program in that environment, what new problems would arise
and what new techniques would solve them. Sounds like a nice
research topic. Who is working on it?

--- and partly a semantic issue --- data introduces a single  
level of

lifting, every time it is used, and every constructor is completely
lazy.


Unless you use bangs. So both options are available, and that
essentially is what defines Haskell as being a non-strict language.


(!alpha, !beta) isn't a categorical product, either.  snd (undefined,  
1) = undefined with this type.
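A sketch of that failure, using a made-up strict-pair type SP to model (!alpha, !beta):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A pair with strict components, as in "(!alpha, !beta)" (the type
-- name SP is invented here).  Forcing the constructor forces both
-- fields, so the second projection cannot recover the 1 from
-- (undefined, 1) -- surjective pairing fails in the other direction.
data SP a b = SP !a !b

snd' :: SP a b -> b
snd' (SP _ y) = y

main :: IO ()
main = do
  r <- try (evaluate (snd' (SP (undefined :: Int) (1 :: Int))))
         :: IO (Either SomeException Int)
  putStrLn (either (const "snd' (SP undefined 1) = _|_") show r)
```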




Functions have the same issue --- in the presence of seq,
undefined /= const undefined.


Right. I am becoming increasingly convinced that the seq issue
is a red herring.


Care to give an explanation?




Extensionality is a key part of the definition of all of these
constructions.  The categorical rules are designed to require, in
concrete categories, that the range of the two injections into a
coproduct form a partition of the coproduct, the surjective pairing
law (fst x, snd x) = x holds, and the eta reduction law (\ x -> f x)
= f holds.  Haskell flouts all three; while some categories have few
enough morphisms to get away with this (at least sometimes), Hask is

not one of them.


That interpretation is not something that is essential in the notion
of category, only in certain specific examples of categories
that you know.


I understand category theory.  I also know that the definitions used  
are chosen to get Set `right', which means extensionality in that  
case, and are then extended uniformly across all categories.  This  
has the effect of requiring similar constructions to those in Set in  
other concrete categories.



Product and coproduct in any given category -
whether they exist, what they are like if they exist, and what  
alternative
constructions exist if they do not - reflect the nature of the  
structure

that the category is modeling.


I understand that.  I'm not sure you do.



I am interested in understanding the category Hask that represents
what Haskell really is like as a programming language.


Good luck.


Not
under some semantic mapping that includes non-computable
functions, and that forgets some of the interesting structure
that comes from laziness (though that is undoubtedly also
very interesting).


Bear in mind in your quest, that at the end of it you'll most likely
conclude, like everyone else, that good old equational reasoning is
sound for the programs you actually write at least 90% of the time
(with appropriate induction principles), and complete for at least
90% of what you want to write, and go back to using it exclusively
for real programming.


jcc




Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-05 Thread Yitzchak Gale
Jonathan Cast wrote:
 The normal view taken by Haskellers is that the denotations of
 Haskell types are CPPOs.
  So:
 (1) Must be monotone
 (2) Must be continuous
 (Needn't be strict, even though that messes up the resulting
 category substantially).

I wrote:
 I'm not convinced that the category is all that messed up.

 Well, no coproducts (Haskell uses a lifted version of the coproduct
 from CPO).

What goes wrong with finite coproducts? The obvious thing to
do would be to take the disjoint union of the sets representing the
types, identifying the copies of _|_.

What is the lifted version you are referring to?

  Of course, Haskell makes things even worse by lifting the
 product and exponential objects,

OK, what goes wrong there, and what is the lifting?

Thanks,
Yitz


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-05 Thread Jonathan Cast

On 5 Jan 2008, at 6:03 PM, Yitzchak Gale wrote:


Jonathan Cast wrote:

The normal view taken by Haskellers is that the denotations of
Haskell types are CPPOs.
 So:
(1) Must be monotone
(2) Must be continuous
(Needn't be strict, even though that messes up the resulting
category substantially).


I wrote:

I'm not convinced that the category is all that messed up.



Well, no coproducts (Haskell uses a lifted version of the coproduct
from CPO).


What goes wrong with finite coproducts? The obvious thing to
do would be to take the disjoint union of the sets representing the
types, identifying the copies of _|_.


This isn't a coproduct.  If we have f x = 1 and g y = 2, then there  
should exist a function h such that h . Left = f and h . Right = g,  
i.e.,


  h (Left x)
= f x
= 1

and
  h (Right y)
= g y
= 2

But by your rule, Left undefined = Right undefined, so
  1
= h (Left undefined)
= h (Right undefined)
= 2

Which is a contradiction.  Identifying Left _|_ and Right _|_  
produces a pointed CPO, but it's not a coproduct.  Unless, of course,  
you require your functions to be strict --- then both f and g above
become illegal, and repairing them removes the problem.



What is the lifted version you are referring to?


Take the ordinary disjoint union, and then add a new _|_ element,  
distinct from both existing copies of _|_ (which are still distinct  
from each other).





 Of course, Haskell makes things even worse by lifting the
product and exponential objects,


OK, what goes wrong there, and what is the lifting?


Again, in Haskell, (_|_, _|_) /= _|_.  The difference is in the function

f (x, y) = 1

which gives f undefined = undefined but f (undefined, undefined) =  
1.  Unfortunately, this means that (alpha, beta) has an extra _|_  
element, distinct from (_|_, _|_), so it's not the categorical product (which
would lack such an element for the same reason as above).
This is partly an implementation issue --- compiling pattern matching  
without introducing such a lifting requires a parallel implementation  
--- and partly a semantic issue --- data introduces a single level of  
lifting, every time it is used, and every constructor is completely  
lazy.


Functions have the same issue --- in the presence of seq,
undefined /= const undefined.


Extensionality is a key part of the definition of all of these  
constructions.  The categorical rules are designed to require, in  
concrete categories, that the range of the two injections into a  
coproduct form a partition of the coproduct, the surjective pairing  
law (fst x, snd x) = x holds, and the eta reduction law (\ x -> f x)
= f holds.  Haskell flouts all three; while some categories have few
enough morphisms to get away with this (at least sometimes), Hask is
not one of them.


jcc



Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-03 Thread Yitzchak Gale
Hi Jonathan,

I wrote:
 So in what way are Set morphisms restricted from being
 Hask morphisms?

Jonathan Cast wrote:
 The normal view taken by Haskellers is that the denotations of
 Haskell types are CPPOs.

CPPO?

  So:

 (1) Must be monotone
 (2) Must be continuous

Could you please define what you mean by those terms
in this context?

 (Needn't be strict, even though that messes up the resulting category
 substantially).

I'm not convinced that the category is all that messed up.

Thanks,
Yitz


Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-03 Thread Yitzchak Gale
Hi Benja,

I wrote:
 By the type expression Integer -> Integer
 we mean all Haskell functions mapping Integers to Integers.
 There are only countably many of those.
 ...
 But that was not the context in this thread. The category
 Hask that we often mention in discussions about Haskell
 the programming language is most certainly a small category.

Benja Fallenstein wrote:
 I don't know. My understanding has been that at least part of the
 benefit of denotational semantics is that you can define what an
 expression means without referring back to the syntactic construction
 or the operational semantics of that expression -- and thus use the
 denotational semantics to check whether the operational semantics are
 right. But if you start with all Haskell functions, you already
 have to know what a Haskell function *is*.

Denotational semantics maps expressions in a
language - hence, syntax - into something that
represents their semantics. You can choose
different such mappings to represent different
semantics of the same expressions.

The Haskell Report clearly defines what a Haskell
function is in terms of syntax. So my semantics are
well-defined, and they represent what I understand
when I read a Haskell program.

In fact, these semantics do not really depend on
all aspects of the syntax - only the existence of
certain primitive functions, and certain constructions
such as function definition, pattern matching,
ADTs, etc. The same assumptions are made for any
semantics of Haskell.

Benja Fallenstein wrote:
 I think it should be allowed to think of the semantics of Haskell as
 being defined independently of the (relatively operational) notion of
 computable function, and then define computable function to be
 that subset of the functions in the model that you can actually write
 in Haskell.

Computable function is not operational - it just means
functions that are lambdas, if you'd like. It just so happens
that, so far, those are the only functions we know how
to compute operationally. Maybe that quantum stuff...

But yes, sure you can do that. That seems to be the approach
in some papers. In particular, the one by Reynolds in which he proves
that Haskell types cannot be represented by sets.

Sounds like strong evidence that those are the wrong
semantics to choose when studying Haskell as a programming
language. Though it is certainly interesting to do so
in a theoretical setting.

 And the only explicit non-syntactic constructions of
 models for Haskell-like languages that I'm familiar with aren't
 countable (they contain all continuous functions, which in the case of
 (Integer -> Integer) comes out to all monotone functions).

It is not any less syntactic than mine. It only differs in
the semantics assigned to the symbol Integer -> Integer.

 So I disagree that the function types of Hask should automatically be
 taken to be countable.

No, I agree with you. It's not automatic. It depends on
your choice of semantics.

 If you want to assume it for some given
 purpose, sure, fine, but IMO that's an additional assumption, not
 something inherent in the Haskell language.

Agreed.

Thanks,
Yitz


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-03 Thread Yitzchak Gale
Jonathan Cast wrote:
 The normal view taken by Haskellers is that the denotations of
 Haskell types are CPPOs.

I wrote:
 CPPO?

 (1) Must be monotone
 (2) Must be continuous

 Could you please define what you mean by those terms
 in this context?

Jens Blanck wrote:
 The extra P would stand for pointed (has a least element, bottom), this is
 common in some communities. To me though, a cpo (complete partial order) is
 closed under directed suprema and the empty set is directed so bottom is
 already required. The category of cpos is not cartesian closed. For
 denotational semantics I believe the subcategory of Scott domains are what
 is usually considered.

 Continuous functions on cpos are by definition monotone and they respect
 directed suprema.

Thanks!
Yitz


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-03 Thread Jonathan Cast

On 3 Jan 2008, at 3:40 AM, Jens Blanck wrote:


 The normal view taken by Haskellers is that the denotations of
 Haskell types are CPPOs.

CPPO?

  So:

 (1) Must be monotone
 (2) Must be continuous

Could you please define what you mean by those terms
in this context?

 (Needn't be strict, even though that messes up the resulting  
category

 substantially).

I'm not convinced that the category is all that messed up.


Well, no coproducts (Haskell uses a lifted version of the coproduct  
from CPO).  Of course, Haskell makes things even worse by lifting the  
product and exponential objects, as well, which come to think of it  
is unnecessary even in the category of CPPOs and not necessarily  
strict continuous functions.


The extra P would stand for pointed (has a least element, bottom),  
this is common in some communities. To me though, a cpo (complete  
partial order) is closed under directed suprema and the empty set is  
directed so bottom is already required.


Not so.  A set is directed iff every finite subset has an upper bound  
in the set; {} is finite, so it must have an upper bound in the set.   
So directed sets must be non-empty.  (So CPOs needn't be pointed).


jcc



Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-02 Thread ajb

G'day all.

Quoting Yitzchak Gale [EMAIL PROTECTED]:


Data types consist only of computable elements. Since there
are only countably many computable functions, every data type
has at most countably many elements. In particular, it is a set.


I still say it isn't a set in the same way that a group isn't a set.
Haskell data types have structure that is respected by Haskell
homomorphisms.  Sets don't.

Cheers,
Andrew Bromage


Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-02 Thread Benja Fallenstein
Hi Yitz,

On Jan 2, 2008 10:34 AM, Yitzchak Gale [EMAIL PROTECTED] wrote:
 No, only countably many. By the type expression Integer -> Integer
 we mean all Haskell functions mapping Integers to Integers.
 There are only countably many of those.
...
 But that was not the context in this thread. The category
 Hask that we often mention in discussions about Haskell
 the programming language is most certainly a small category.

I don't know. My understanding has been that at least part of the
benefit of denotational semantics is that you can define what an
expression means without referring back to the syntactic construction
or the operational semantics of that expression -- and thus use the
denotational semantics to check whether the operational semantics are
right. But if you start with all Haskell functions, you already
have to know what a Haskell function *is*.

I think it should be allowed to think of the semantics of Haskell as
being defined independently of the (relatively operational) notion of
computable function, and then define computable function to be
that subset of the functions in the model that you can actually write
in Haskell. And the only explicit non-syntactic constructions of
models for Haskell-like languages that I'm familiar with aren't
countable (they contain all continuous functions, which in the case of
(Integer -> Integer) comes out to all monotone functions).

So I disagree that the function types of Hask should automatically be
taken to be countable. If you want to assume it for some given
purpose, sure, fine, but IMO that's an additional assumption, not
something inherent in the Haskell language.

Best,
- Benja


Re: [Haskell-cafe] Basic question concerning the category Hask (was: concerning data constructors)

2008-01-02 Thread Jonathan Cast

On 2 Jan 2008, at 5:49 AM, Yitzchak Gale wrote:


Hi Andrew,

Andrew Bromage wrote:
I still say it isn't a set in the same way that a group isn't a set.

Haskell data types have structure that is respected by Haskell
homomorphisms.  Sets don't.


Ah, that's certainly true. But what is that additional structure?

In categories that have a forgetful functor to Set, the interesting
part of their structure comes from the fact that their
morphisms are only a proper subset of the morphisms
in Set.

So in what way are Set morphisms restricted from being
Hask morphisms?


The normal view taken by Haskellers is that the denotations of
Haskell types are CPPOs (complete pointed partial orders).  So:


(1) Must be monotone
(2) Must be continuous

(Needn't be strict, even though that messes up the resulting category
substantially).


jcc



Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-01 Thread Benja Fallenstein
On Dec 31, 2007 7:17 AM,  [EMAIL PROTECTED] wrote:
 This declaration states that there's a bijection between the elements of
 Foo and the elements of 2^Foo, which by Cantor's diagonal theorem cannot
 be true for any set.  That's because we only allow computable functions,

Nit the nit: Or (more commonly, I think) all continuous functions.

 and Foo -> Bool is actually an exponential object in the category Hask.

- Benja
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-01 Thread Yitzchak Gale
Andrew Bromage wrote:
 [*] Theoretical nit: It's not technically a set.

 Consider the data type:

  data Foo = Foo (Foo -> Bool)

 This declaration states that there's a bijection between the elements of
 Foo and the elements of 2^Foo, which by Cantor's diagonal theorem cannot
 be true for any set.  That's because we only allow computable functions,
 and Foo -> Bool is actually an exponential object in the category Hask.

Data types consist only of computable elements. Since there
are only countably many computable functions, every data type
has at most countably many elements. In particular, it is a set.

The least fixed point under these restrictions is not a bijection
between Foo and 2^Foo. It is only a bijection between Foo and
the subset of 2^Foo consisting of computable functions.

-Yitz


Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-01 Thread Yitzchak Gale
Andrew Bromage wrote:
 [*] Theoretical nit: It's not technically a set.

 Consider the data type:

  data Foo = Foo (Foo -> Bool)

 This declaration states that there's a bijection between the elements of
 Foo and the elements of 2^Foo, which by Cantor's diagonal theorem cannot
 be true for any set.  That's because we only allow computable functions,
 and Foo -> Bool is actually an exponential object in the category Hask.

I wrote:
 Data types consist only of computable elements. Since there
 are only countably many computable functions,

What I meant by that is that there are only countably
many lambdas, and we can define a computable value
as a lambda.

The classical definition of general recursive function
refers to functions in Integer -> Integer to begin
with, so there can only be countably many values by
construction.

 every data type
 has at most countably many elements. In particular, it is a set.

 The least fixed point under these restrictions is not a bijection
 between Foo and 2^Foo. It is only a bijection between Foo and
 the subset of 2^Foo consisting of computable functions.

-Yitz


Re: [Haskell-cafe] Basic question concerning data constructors

2008-01-01 Thread Luke Palmer
On Jan 1, 2008 3:43 PM, Yitzchak Gale [EMAIL PROTECTED] wrote:
 The classical definition of general recursive function
 refers to functions in Integer -> Integer to begin
 with, so there can only be countably many values by
 construction.

Except that there are uncountably many (2^Aleph_0) functions in
Integer -> Integer.  That doesn't change the fact that there are
countably many computable functions, as you guys have been saying.
But I think you need to refer to the lambda calculus or Turing
machine definition to get countability.

Luke


Re: [Haskell-cafe] Basic question concerning data constructors

2007-12-30 Thread Bulat Ziganshin
Hello Joost,

Sunday, December 30, 2007, 5:24:59 PM, you wrote:

 data ClockTime = TOD Integer Integer

it declares a type named ClockTime (which you may use in type
signatures, other type declarations and so on) with one constructor
TOD accepting two Integer values. The only way to construct a value
of this type is to apply TOD to two Integer expressions. Believe it
or not, this declaration automatically defines TOD as a function with
the following signature:

TOD :: Integer -> Integer -> ClockTime

e.g.:

seconds2clockTime :: Double -> ClockTime
seconds2clockTime s = TOD secs (round ((s - fromInteger secs) * 1e12))
  where secs = floor s

the only way to deconstruct values of this type is to use the TOD
constructor in pattern matching, e.g.:

clockTime2seconds :: ClockTime -> Double
clockTime2seconds (TOD s p) = fromInteger s + fromInteger p / 1e12
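The two conversions can be checked round-trip in a self-contained sketch (ClockTime and TOD are from System.Time, redeclared here so the snippet compiles standalone; the picosecond field is computed from the fractional part of the seconds, which I believe is the intended semantics, and the round-trip test itself is mine):

```haskell
-- Standalone copy of System.Time's ClockTime shape: seconds and picoseconds.
data ClockTime = TOD Integer Integer deriving Show

seconds2clockTime :: Double -> ClockTime
seconds2clockTime s = TOD secs (round ((s - fromInteger secs) * 1e12))
  where secs = floor s

clockTime2seconds :: ClockTime -> Double
clockTime2seconds (TOD s p) = fromInteger s + fromInteger p / 1e12

main :: IO ()
main = print (clockTime2seconds (seconds2clockTime 1.25))
```

Converting 1.25 seconds to a ClockTime and back yields 1.25 again.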




-- 
Best regards,
 Bulat    mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] Basic question concerning data constructors

2007-12-30 Thread Jake McArthur

On Dec 30, 2007, at 8:24 AM, Joost Behrends wrote:

For adapting hws (one of the reasons for me to be here, not many
languages have a native web server) to Windows i must work on time.
In System.Time i found


data ClockTime = TOD Integer Integer

2 questions arise here: Does this define TOD (which i do not find
elsewhere) together with ClockTime also? And: Why is this not:

data ClockTime Integer Integer = TOD Integer Integer ?

Is it just an abbreviation for the first? Or is there a connection to
ClockTime as an abstract data type (a notion which would have a subtly
different meaning than in OOP, since instance is such a different
thing here).


You are right that it defines the TOD constructor. As for your second
question, I will try to be somewhat formal in my response, so
hopefully I don't just throw you off more. The quick answer is that
since we already know the parameters on the right side are Integers,
we don't need to specify them on the left side.


When you define datatypes, you are essentially defining a type-level
constructor on the left hand side and (value-level) constructors on
the right hand side. Just like normal functions, constructors and type
constructors can be parameterized. Let's deviate for a moment from
Haskell's notation for data types and approach this from the viewpoint
of a dependently typed language (a language in which there is little
separation between the type level and the value level). The data type
we are defining here is called ClockTime, so its type might be
represented as


 ClockTime :: *

, where * represents Kind, the type of types. For completeness, the  
sole constructor we define is called TOD and has type


 TOD :: Integer -> Integer -> ClockTime

. Now, let's say we had tried defining ClockTime with parameters as  
you suggested.


 ClockTime' :: Integer -> Integer -> *

Do you see the problem? In order to use the ClockTime type
constructor, we would have to use Integer values. This (1) does not
make much sense in a language like Haskell, which doesn't have true
dependent types, and (2) does not help us in any way with our
definition of the TOD constructor. We already know by the definition
of TOD that it takes two Integers and returns a ClockTime. If we used
this modified definition of ClockTime, we would have to parameterize
it to specify TOD, maybe like:


 TOD' :: Integer -> Integer -> ClockTime' 2 3

(I chose the 2 and 3 arbitrarily, but these exact values have no  
particular relevance here.) This would not work in Haskell.


However, there are cases where you would want to parameterize a type
constructor. For example, say we _wanted_ our TOD constructor to take
two values of arbitrary types. If we want the type level to reflect
the types of these parameters, ClockTime must be parameterized.


 ClockTime'' :: * -> * -> *
 TOD''   :: a -> b -> ClockTime'' a b

In Haskell notation, this would be equivalent to

 data ClockTime'' a b = TOD'' a b

. So, to summarize, the reason that we don't use

 data ClockTime Integer Integer = TOD Integer Integer

is because we don't want to parameterize ClockTime, and even if we  
did, we could not use Integer values like this to do it because  
Haskell is not dependently typed.


- Jake


Re: [Haskell-cafe] Basic question concerning data constructors

2007-12-30 Thread David Menendez
On Dec 30, 2007 9:24 AM, Joost Behrends [EMAIL PROTECTED] wrote:

 A similar point: The tutorials teach that = has a similar meaning to
 = in mathematics. But there is a big difference: it is not reflexive.
 The right side is the definition of the left. Thus x=y still has some
 kind of temporality, which mathematics doesn't have. Wadler himself
 describes bunches of lazily computed equations as dataflows somewhere.


The = in the data declaration syntax is different from the = in value
and type declarations.

   type A = B

means that A can be used wherever B can be used.

   data A = B

means that B constructs a value of type A. The = acts more like the
::= in a BNF grammar. It is *not* a claim that A equals B, since A is a
type and B is a data constructor. Furthermore, types and data constructors
have disjoint namespaces, hence the common idiom of using the same name for
the type and the constructor when the type has only one constructor.

There is an alternative syntax for data declarations in recent versions of
GHC. Using it, you would write:

   data A where
   B :: A

This defines a type A, and a constructor B which has type A.

   data ClockTime where
       TOD :: Integer -> Integer -> ClockTime

This defines a type ClockTime, and a constructor TOD which takes two
Integers and constructs a ClockTime.

   data Pair :: * -> * -> * where
       Pair :: a -> b -> Pair a b

This defines a type constructor Pair, which takes two types and constructs a
new type, and a constructor, also called Pair, which, for arbitrary types a
and b, takes a value of type a and a value of type b and constructs a value
of type Pair a b.
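These declarations can be tried directly in GHC with the GADTs extension; here is a self-contained sketch (the fstP accessor is my addition, for demonstration):

```haskell
{-# LANGUAGE GADTs, KindSignatures #-}

-- GADT-style declaration: Pair the type constructor and Pair the
-- data constructor live in disjoint namespaces, so they may share a name.
data Pair :: * -> * -> * where
  Pair :: a -> b -> Pair a b

-- Values are deconstructed by pattern matching, as with ordinary data syntax.
fstP :: Pair a b -> a
fstP (Pair x _) = x

main :: IO ()
main = print (fstP (Pair (42 :: Int) "hello"))
```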

-- 
Dave Menendez [EMAIL PROTECTED]
http://www.eyrie.org/~zednenem/


Re: [Haskell-cafe] Basic question concerning data constructors

2007-12-30 Thread ajb

G'day all.

Quoting David Menendez [EMAIL PROTECTED]:


   data A = B

means that B constructs a value of type A. The = acts more like the
::= in a BNF grammar.


And, indeed, that was the syntax for it in Miranda.


It is *not* a claim that A equals B, since A is a
type and B is a data constructor.


Wrong.  It _is_ a claim that A equals B.  Or, rather, that the set of
values[*] A is defined as the least-fixpoint solution of the equation
A = B.

Think of this:

data IntList = Nil | Cons Int IntList

This corresponds to an equation:

IntList = { Nil } + { Cons } * Int * IntList

where plus denotes union (or disjoint union; either works in this
case) and star denotes Cartesian product.

The least fixpoint of this equation is precisely the set of values[*]
in IntList.


Furthermore, types and data constructors
have disjoint namespaces, hence the common idiom of using the same name for
the type and the constructor when the type has only one constructor.


I think that's the major source of the confusion here, yes.

Cheers,
Andrew Bromage

[*] Theoretical nit: It's not technically a set.

Consider the data type:

data Foo = Foo (Foo -> Bool)

This declaration states that there's a bijection between the elements of
Foo and the elements of 2^Foo, which by Cantor's diagonal theorem cannot
be true for any set.  That's because we only allow computable functions,
and Foo -> Bool is actually an exponential object in the category Hask.
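The diagonal argument can be written out in Haskell itself (a sketch; apply and diag are my names). If Foo -> Bool were the full set-theoretic 2^Foo, diag would have to satisfy apply diag diag == not (apply diag diag); in Hask that self-application merely denotes _|_ (it loops), which is exactly why only computable functions inhabit Foo -> Bool:

```haskell
-- The declaration from the footnote: Foo wraps a function Foo -> Bool.
newtype Foo = Foo (Foo -> Bool)

apply :: Foo -> Foo -> Bool
apply (Foo f) = f

-- Cantor's diagonal. Evaluating apply diag diag does not terminate.
diag :: Foo
diag = Foo (\x -> not (apply x x))

main :: IO ()
main = print (apply diag (Foo (const True)))  -- terminates: not True
```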


Re: [Haskell-cafe] Basic question....

2007-08-17 Thread Brandon S. Allbery KF8NH


On Aug 17, 2007, at 9:11 , rodrigo.bonifacio wrote:


envKey :: EnvItem (Key, a) -> String
envKey EnvItem (key, value) = key



envValue :: EnvValue (Key, a) -> a
envValue EnvItem (key, value) = value


But this is resulting in the error: [Constructor EnvItem must have
exactly 1 argument in pattern]


You need to parenthesize the constructor.

envValue (EnvItem (_,value)) = value

(The _ indicates that you're not using that item, rather than giving
it a name that won't be used.)


Why do you need to do this?  Because you can pass functions around,
and a constructor is a function.  But your type says you don't want a
bare function there, so the compiler complains.
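The "a constructor is a function" point can be seen concretely in a small sketch (my example, modeled on the thread's EnvItem type):

```haskell
type Key = String

data EnvItem a = EnvItem (Key, a)

-- EnvItem is an ordinary function of type (Key, a) -> EnvItem a,
-- so it can be passed to higher-order functions like map:
items :: [EnvItem Int]
items = map EnvItem [("x", 1), ("y", 2)]

main :: IO ()
main = print (length items)
```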


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH




Re: [Haskell-cafe] Basic question....

2007-08-17 Thread Janis Voigtlaender
- Original Message -
From: rodrigo.bonifacio [EMAIL PROTECTED]
Date: Friday, August 17, 2007 3:11 pm
Subject: [Haskell-cafe] Basic question

 Hi all.
 
 I want to create the following polymorphic type (EnvItem) to which we
 can apply two functions (envKey and envValue). I tried the following:
 
  type Key = String
 
  data EnvItem a = EnvItem (Key, a)
 
  envKey :: EnvItem (Key, a) -> String
  envKey EnvItem (key, value) = key
 
  envValue :: EnvValue (Key, a) -> a
  envValue EnvItem (key, value) = value
 
 But this is resulting in the error: [Constructor EnvItem must
 have exactly 1 argument in pattern]
 
 I think this is a very basic problem, but I don't know what is wrong.

You are simply missing some brackets:

 envKey :: EnvItem (Key, a) -> String
 envKey (EnvItem (key, value)) = key

 envValue :: EnvValue (Key, a) -> a
 envValue (EnvItem (key, value)) = value

Ciao, Janis.





Re: [Haskell-cafe] Basic question....

2007-08-17 Thread Chaddaï Fouché
Not only do you lack some parens around your patterns, your function
types are wrong:

type Key = String

data EnvItem a = EnvItem (Key, a)

envKey :: EnvItem a -> String
envKey (EnvItem (key, value)) = key

envValue :: EnvItem a -> a
envValue (EnvItem (key, value)) = value

-- 
Jedaï


Re: [Haskell-cafe] Basic question....

2007-08-17 Thread Brent Yorgey
On 8/17/07, rodrigo.bonifacio [EMAIL PROTECTED] wrote:

 Hi all.

 I want to create the following polymorphic type (EnvItem) that we can
 apply two functions (envKey and envValue). I tried the following:

  type Key = String

  data EnvItem a = EnvItem (Key, a)

  envKey :: EnvItem (Key, a) -> String
  envKey EnvItem (key, value) = key

  envValue :: EnvValue (Key, a) -> a
  envValue EnvItem (key, value) = value

 But this is resulting in the error: [Constructor EnvItem must have
 exactly 1 argument in pattern]

 I think this is a very basic problem, but I don't know what is wrong.

 Regards,

 Rodrigo.


By the way, I would suggest giving the data type and constructor different
names:

data EnvItem a = EI (Key, a)

You do often see people use the same name for both, but that can be
confusing since they are really two different things.  The envKey function
(for example) would now be written like this:

envKey :: EnvItem a -> Key
envKey (EI (key, _)) = key

The difference between the parameter type (EnvItem a) and a pattern to match
the shape of such a value (EI (key, _)) is now much clearer: whatever is on
the left side of the data declaration is the type, and goes in type
signatures; whatever is on the right side describes the shape of values of
that type, and is used to construct or deconstruct (through
pattern-matching) such values.  This way it is much harder to make mistakes
like (for example) putting EnvItem (Key, a) in the type signature instead of
EnvItem a.
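Putting this suggestion together into one compilable sketch (the example values are mine):

```haskell
type Key = String

-- Distinct names for the type (EnvItem) and the constructor (EI).
data EnvItem a = EI (Key, a)

envKey :: EnvItem a -> Key
envKey (EI (key, _)) = key

envValue :: EnvItem a -> a
envValue (EI (_, value)) = value

main :: IO ()
main = do
  let item = EI ("host", "localhost")
  putStrLn (envKey item)
  putStrLn (envValue item)
```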

-Brent


Re: [Haskell-cafe] Basic question....

2007-08-17 Thread Bryan Burgers
On 8/17/07, rodrigo.bonifacio [EMAIL PROTECTED] wrote:
 Hi all.

 I want to create the following polymorphic type (EnvItem) to which we can
 apply two functions (envKey and envValue). I tried the following:

  type Key = String

  data EnvItem a = EnvItem (Key, a)

  envKey :: EnvItem (Key, a) -> String
  envKey EnvItem (key, value) = key

  envValue :: EnvValue (Key, a) -> a
  envValue EnvItem (key, value) = value

 But this is resulting in the error: [Constructor EnvItem must have
 exactly 1 argument in pattern]

 I think this is a very basic problem, but I don't know what is wrong.

 Regards,

 Rodrigo.

In addition to what others have already said, I'd like to point out
that you do not really need a tuple in your data item.

 data EnvItem a = EI Key a

 envKey :: EnvItem a -> Key
 envKey (EI key _) = key

 envValue :: EnvItem a -> a
 envValue (EI _ value) = value

Also, you made a distinction between 'Key' and 'String', which is
good. But then in 'envKey', you used 'String' in the signature instead
of 'Key'. That's a little confusing, and also, should you ever want to
change the representation of 'Key', you would then have to change the
signature of envKey.
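One more option the thread doesn't mention: record syntax generates the two accessors for free (a sketch; the field names follow the thread's envKey/envValue, and the example value is mine):

```haskell
type Key = String

-- Record syntax: GHC derives envKey :: EnvItem a -> Key
-- and envValue :: EnvItem a -> a automatically.
data EnvItem a = EnvItem { envKey :: Key, envValue :: a }

main :: IO ()
main = putStrLn (envKey (EnvItem { envKey = "port", envValue = (8080 :: Int) }))
```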

Just my two cents,

Bryan