Re: [Haskell-cafe] Object Oriented programming for Functional Programmers

2012-12-30 Thread Jay Sulzberger



On Sun, 30 Dec 2012, Daniel Díaz Casanueva dhelta.d...@gmail.com wrote:


Hello, Haskell Cafe folks.

My programming life (which started about 3-4 years ago) has always been
in the functional paradigm. Occasionally, I had to program in Pascal and
Prolog for my university (where I learned Haskell). I also did some PHP,
SQL and HTML while building some web sites, languages that I taught
myself. I have never had any contact with JavaScript, though.

But all these languages were secondary languages in my life, with
Haskell my predominant preference. Haskell was the first programming
language I learned, and subsequent languages never seemed as natural or
worthwhile to me. In fact, every time I had to use another language, I
created a combinator library in Haskell to write it (this was the reason
that brought me to start the HaTeX library). Of course, this practice
wasn't always the best approach.

But why am I writing this to you, haskellers?

Well, my curiosity is driving me to learn a new general-purpose
programming language. Haskellers frequently compare object-oriented
languages with Haskell itself, but I have never programmed in any
OO language! (Perhaps this is an uncommon case.) I thought it could be
good for me (as a programmer) to learn C/C++. Many interesting courses
(most of them) use these languages, and I feel limited as a Haskell-only
programmer. It looks like I have to learn imperative programming (with
side effects all around) at some point in my programming life.

So my questions for you all are:

* Is it really worthwhile for me to learn OO-programming?

* If so, where should I start? There are plenty of "functional programming
for OO programmers" guides, but I have never seen "OO programming for
functional programmers".


There are several different things called "object oriented
programming".  Here is what Alan Kay once said about C++:

  Actually I made up the term object-oriented, and I can tell
  you I did not have C++ in mind.

Above quote from

  http://en.wikiquote.org/wiki/Alan_Kay
  [page was last modified on 30 November 2012, at 16:06]

For me the most important things about objects are:

1. In the World of the Programming System there is a version of
   Lisp's eq?, ah, that word is the Scheme word.

2. Really, objects are what are now called agents.

The word inheritance does not appear in the first 600^W300
pages of my Ideal Textbook on the Theory of Objects in
Programming.

oo--JS.




* Is it true that learning other programming languages leads to a better
use of your favorite programming language?

* Will I learn new programming strategies that I can use back in the
Haskell world?

Thanks in advance for your kind responses,
Daniel Díaz.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Object Oriented programming for Functional Programmers

2012-12-30 Thread Jay Sulzberger



On Mon, 31 Dec 2012, MigMit miguelim...@yandex.ru wrote:


Well, "functional programmer" is a relatively broad term. If
you're coming from academia, so that for you Haskell is some
sort of lambda calculus spoiled by practical aspects, then I'd
suggest Luca Cardelli's book "A Theory of Objects".

Also, as Daniel told you already, don't start from C++, it


Name typo, should be Jay, noted.


really has very little to do with OOP. Its primary merit is a
very powerful system of macros (called "templates" in the C++
world), not objects. If you want something mainstream, Java
would be a good choice, and C# an even better one (although it
would be more convenient for you if you used Windows).

Or you can try OCaml, which is functional enough for you not to
feel lost, and object-oriented as well.

 Sent from my iPad


For systems to look at I recommend Simula (some early version),
Smalltalk, Common Lisp, and Erlang.  My guess is that Haskell's
type classes are a mechanism for creating something like Common
Lisp's generic functions.  I know too little about Haskell to
say whether type classes immediately give you single-dispatch
things or multiple-dispatch things.
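
A rough sketch of the dispatch question (assuming GHC's
MultiParamTypeClasses extension; example code not from the thread): a
one-parameter class selects an implementation from a single type, while
a multi-parameter class selects from a tuple of types, though in both
cases the choice is made statically from the types, unlike CLOS's
runtime dispatch on argument values:

{-# LANGUAGE MultiParamTypeClasses #-}

-- Dispatch on the pair of argument types, in the style of a CLOS
-- generic function specialized on two arguments.
class Collide a b where
  collide :: a -> b -> String

data Asteroid = Asteroid
data Ship     = Ship

instance Collide Asteroid Ship where
  collide _ _ = "asteroid hits ship"

instance Collide Ship Asteroid where
  collide _ _ = "ship hits asteroid"

-- collide Asteroid Ship  ==>  "asteroid hits ship"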

These two Wikipedia articles are useful, I think:

  http://en.wikipedia.org/wiki/Generic_function
  [page was last modified on 15 November 2012 at 03:50]

  http://en.wikipedia.org/wiki/Common_Lisp_Object_System
  [page was last modified on 15 December 2012 at 23:57]

The Diamond Problem and its cousin(s) are worth looking at:

  http://en.wikipedia.org/wiki/Diamond_problem#The_diamond_problem
  [page was last modified on 27 December 2012 at 04:53]

  http://www.ibm.com/developerworks/java/library/j-clojure-protocols/

  
http://stackoverflow.com/questions/4509782/simple-explanation-of-clojure-protocols

oo--JS.





[Haskell-cafe] Added CLOS reference, was: Re: Object Oriented programming for Functional Programmers

2012-12-30 Thread Jay Sulzberger

This page looks to be a good introduction to CLOS:

  http://www.gigamonkeys.com/book/object-reorientation-generic-functions.html

oo--JS.




Re: [Haskell-cafe] Categories (cont.)

2012-12-21 Thread Jay Sulzberger



On Fri, 21 Dec 2012, Chris Smith cdsm...@gmail.com wrote:


It would definitely be nice to be able to work with a partial Category
class, where for example the objects could be constrained to belong to a
class.  One could then restrict a Category to a type level representation
of the natural numbers or any other desired set.  Kind polymorphism should
make this easy to define, but I still don't have a good feel for whether it
is worth the complexity.


Indeed this sort of thing can obviously be done.  But it will
require some work.  The question is where, when, and how much
effort (which may mean money) it will take.  One encouraging
thing is that recently more people understand that type
checking/inference in the style of ML/Haskell/etc. is not so
hard, and that general constraint solvers/SMT systems can do the
job.  Getting the compiler to make use of the results of such
type estimates/assignments is the hard part today, I think.

Last night I discovered the best blurb for the program:

  http://www.cis.upenn.edu/~sweirich/plmw12/Slides/plmw12-Pierce.pdf

via the subReddit:

  http://www.reddit.com/r/dependent_types/

oo--JS.



On Dec 21, 2012 6:37 AM, Tillmann Rendel ren...@informatik.uni-marburg.de
wrote:


Hi,

Christopher Howard wrote:


instance Category ...



The Category class is rather restricted:

Restriction 1:
You cannot choose what the objects of the category are. Instead, the
objects are always all Haskell types. You cannot choose anything at all
about the objects.

Restriction 2:
You cannot freely choose what the morphisms of the category are. Instead,
the morphisms are always Haskell values. (To some degree, you can choose
*which* values you want to use).


These restrictions disallow many categories. For example, the category
where the objects are natural numbers and there is a morphism from m to n
if m is greater than or equal to n cannot be expressed directly: natural
numbers are not Haskell types; and "is greater than or equal to" is not a
Haskell value.

  Tillmann
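
As a sketch (assuming GHC 7.6's DataKinds and GADTs extensions; code
not from the thread), the greater-than-or-equal category above can be
recovered at the type level, which is the kind of restricted Category
Chris Smith asks about:

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

-- A morphism from m to n exists exactly when m >= n.
data GTE (m :: Nat) (n :: Nat) where
  Refl :: GTE n n
  Step :: GTE m n -> GTE ('S m) n

-- Identity morphism: n >= n.
gid :: GTE n n
gid = Refl

-- Composition is transitivity of >=; associativity holds because a
-- proof of m >= n, when it exists, is unique.
gcomp :: GTE b c -> GTE a b -> GTE a c
gcomp f Refl     = f
gcomp f (Step p) = Step (gcomp f p)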








Re: [Haskell-cafe] category design approach for inconvenient concepts

2012-12-20 Thread Jay Sulzberger



On Thu, 20 Dec 2012, Oleksandr Manzyuk manz...@gmail.com wrote:


"the category of Haskell types and Haskell functions"[1]

[1] Note that this may not actually work out to be a category, but the basic
idea is sound.


I would be curious to see this example carefully worked out.  I often
hear that Haskell types and Haskell functions constitute a category,
but I have seen no rigorous definition.

I have no problem with the statement "Objects of the category Hask
are Haskell types."  Types are well-defined syntactic entities.  But
what is a morphism in the category Hask from a to b?  Commonly, people
say "functions from a to b" or "functions a -> b", but what does that
mean?  What is a function as a mathematical object?  It is a plausible
idea to say that a function from a to b is a closed term of type a ->
b (and terms are again well-defined syntactic entities).  How do we
define composition?  Presumably, by

f . g = \x -> f (g x)

This however already presupposes that we are dealing not with raw
terms, but with their alpha-equivalence classes (otherwise the above
is not well-defined, as it depends on the choice of the variable x).
Even if we mod out alpha-equivalence, composition so defined fails to
be associative on the nose, i.e., up to equality of (alpha-equivalence
classes of) terms.  Apparently, we want to consider equivalence
classes of terms modulo some finer equivalence relation.  What is this
equivalence relation?  Some kind of definitional equality?
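
Concretely, unfolding the definition twice:

  (f . g) . h  =  \x -> (\y -> f (g y)) (h x)
  f . (g . h)  =  \x -> f ((\y -> g (h y)) x)

The two right-hand sides are beta-convertible but not alpha-equivalent,
so associativity holds only modulo beta (or some finer definitional
equality).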

Apparently, this (rather non-trivial) exercise has already been
carried out for the simply typed lambda-calculus.  I'd be curious to
see how that generalizes to Haskell (or some equivalent formal
system).

Sasha


Yes.  It would be well worth carefully carrying out your program
for some approximation of a large part of Haskell as she lives in
GHC.  As mentioned earlier, we should not ignore the distinctions
between

  a. the text of a Haskell program
  b. the binary of the now compiled program
  c. the running of the program
  d. the input output behavior of the program

Attempting to force the hoped-for clarification to operate on only
one part of the whole, at least four-part, structure is likely
not to give us what we, ah, I, really want to see.

There is some work directly dealing with part of the problem:

  http://www.haskell.org/haskellwiki/Hask

oo--JS.



--
Oleksandr Manzyuk
http://oleksandrmanzyuk.wordpress.com




Re: [Haskell-cafe] Categories (cont.)

2012-12-20 Thread Jay Sulzberger



On Thu, 20 Dec 2012, Christopher Howard christopher.how...@frigidcode.com 
wrote:


I've perhaps been trying everyone's patience with my noobish CT
questions, but if you'll bear with me a little longer: I happened to
notice that there is in fact a Category class in Haskell base, in
Control.Category:

quote:

class Category cat where

A class for categories. id and (.) must form a monoid.

Methods

id :: cat a a

the identity morphism



Here we run into at least one general phenomenon, which wren ng thornton
discusses in this thread.  One phenomenon is:

1. Different formalizations of the same concept, here "category",
strike the student, when they are first seen, as completely
different things.  In particular, different formalisms often have
different types, where by "types" here we mean types in the
implicit type system the student assumes.  The Haskell
declaration

  id :: cat a a

declares that id is an element of type (cat a a), that is, that
given any (suitable) type a, there is an element, which we call
"id", of the type (cat a a).  Here (cat a a) might be read as "the
type of all morphisms between an element of type *anything* and
another element of type *anything*, where the two types are
the same".  Now in most category theory textbooks we have an axiom:

  For each object obj in the category, we have a morphism
  identity(obj): obj -> obj.
  That is, we have a map defined on Obj, the set of objects
  of our category, which takes values in Mor, the (disjoint)
  union of Mor(a,b) over all objects of our category.

One natural-to-the-beginner idea is that to do a
translation^Winterpretation of this into Haskell, we would need
a Haskell procedure defined on (approximately) all types a which,
once we fix our category C, will hand us back a procedure of type
(C a a).  Note that this Identity procedure takes as input a type
and hands back a lower-level thing, namely a value of type
(C a a).  So the type of Identity in our approximation of Haskell
would be:

  * -> (C * *)

where we have the constraint

  All the textual *s appearing in the above line
  refer to the same type.

Now, I am a beginner in Haskell, and I am not sure whether we can
make such a declaration in Haskell.  In my naive type system
(id :: cat a a) gives id a different type from Identity.
Identity takes one input, patently, but id seems to take no
inputs.  Admittedly we may pass easily, by means of a functor
(imprecision here: what are the two categories for this functor?),
from id to Identity, and by another functor back.  I do think
that Haskell's handling of universally polymorphic types does
indeed provide for automatic, behind-the-source-code, application
of these two functors.

To be painfully explicit: (id :: cat a a) says, in my naive type
theory, that id is a name for some particular element of
(cat a a).  Identity(a) is the result of applying Identity to the
type a.  A name is at a different level from the thing named, in
my naive type theory.

2. The above is a tiny example of the profusion of swift
apparently impossible conflations and swift implicit, and often
also explicit, distinctions which are sometimes offered in
response to the beginner's puzzlement.


(.) :: cat b c -> cat a b -> cat a c

morphism composition


However, the documentation lists only two instances of Category:
functions (->) and Kleisli monads. For instruction purposes, could
someone show me an example or two of how to make instances of this
class, perhaps for a few of the common types? My initial thoughts were
something like so:

code:

instance Category Integer where

 id = 1

 (.) = (*)

-- and

instance Category [a] where

 id = []
 (.) = (++)
---

But these lead to kind mismatches.

--
frigidcode.com


Ah, OK, let us actually apply some functors.  I shall make some
mistakes in Haskell, I am sure, but the functors are not due to
me, are well known, and I believe, debugged:

Let us rewrite


instance Category Integer where

 id = 1

 (.) = (*)


as


instance Nearcat0 Integer where

 id = 1

 (.) = (*)


This is surely a category, ah, well, just about, after we apply
some functor^Wtransformation.  As to what Nearcat0 is as a Haskell
thing: ah, I just now see wren's explication, with Haskell code, in
which, I think, Nearcat0 Integer is a thing of type Monoid in Haskell.
I do not know what a phantom type is, but without the constraint
of having to produce a Haskell interpretation, let us just repeat
the standard category theory textbook explication:

A monoid may be seen as a category as follows:

Let M be a monoid with constant 1, and multiplication *.

Then we may define a category C with one object, which object we
will call, say, theobj.  To each element m of the monoid, we
define a morphism cat(m) in Mor(C) such that

  head(cat(m)) = theobj
  tail(cat(m)) = theobj

and for all m, n in the monoid

  cat(m) * cat(n) = cat(m * n)

where we have written * to mean the composition of morphisms
in C.  Note that once we have specified that C has 

Re: [Haskell-cafe] Categories (cont.)

2012-12-20 Thread Jay Sulzberger



On Thu, 20 Dec 2012, Christopher Howard christopher.how...@frigidcode.com 
wrote:


On 12/20/2012 03:59 AM, wren ng thornton wrote:

On 12/20/12 6:42 AM, Christopher Howard wrote:

As mentioned in my other email (just posted), the kind mismatch is
because categories are actually monoid-oids[1], not monoids. That is:

class Monoid (a :: *) where
    mempty  :: a
    mappend :: a -> a -> a

class Category (a :: * -> * -> *) where
    id  :: a i j
    (.) :: a j k -> a i j -> a i k

Theoretically speaking, every monoid can be considered as a category
with only one object. Since there's only one object/index, the types for
id and (.) basically degenerate into the types for mempty and mappend.
Notably, from this perspective, each of the elements of the carrier set
of the monoid becomes a morphism in the category--- which some people
find odd at first.

In order to fake this theory in Haskell we can do:

newtype MonoidCategory a i j = MC a

instance Monoid a => Category (MonoidCategory a) where
    id  = MC mempty
    MC f . MC g = MC (f `mappend` g)

This is a fake because technically (MonoidCategory A X Y) is a different
type than (MonoidCategory A P Q), but since the indices are phantom
types, we (the programmers) know they're isomorphic. From the category
theory side of things, we have K*K many copies of the monoid where K is
the cardinality of the kind *. We can capture this isomorphism if we
like:

castMC :: MonoidCategory a i j -> MonoidCategory a k l
castMC (MC a) = MC a

but Haskell won't automatically insert this coercion for us; we gotta do
it manually. In more recent versions of GHC we can use data kinds in
order to declare a kind like:

MonoidCategory :: * -> () -> () -> *

which would then ensure that we can only talk about (MonoidCategory a ()
()). Unfortunately, this would mean we can't use the Control.Category
type class, since this kind is more restrictive than (* -> * -> * -> *).
But perhaps in the future that can be fixed by using kind polymorphism...


[1] The "-oid" part just means the indexing. We don't use the term
"monoidoid" because it's horrific, but we do use a bunch of similar
terms like semigroupoid, groupoid, etc.
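
A minimal sketch of that data-kinds variant (assuming GHC 7.6's
DataKinds and KindSignatures; not code from the thread):

{-# LANGUAGE DataKinds, KindSignatures #-}

-- With the unit kind () as the index kind, the two phantom indices can
-- only ever be '(), so the K*K copies of the monoid collapse to one.
-- The kind is now * -> () -> () -> *, which no longer matches
-- Control.Category's * -> * -> *.
newtype MonoidCategory a (i :: ()) (j :: ()) = MC a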



Finally... I actually made some measurable progress, using these
phantom types you mentioned:

code:

import Control.Category

newtype Product i j = Product Integer
  deriving (Show)

instance Category Product where
  id = Product 1
  Product a . Product b = Product (a * b)


I can do composition, illustrate identity, and illustrate associativity:

code:

h> Product 5 >>> Product 2
Product 10

h> Control.Category.id <<< (Product 3)
Product 3

h> Control.Category.id >>> Product 3
Product 3
h> Product 3 >>> Control.Category.id
Product 3

h> (Product 2 >>> Product 3) >>> Product 5
Product 30
h> Product 2 >>> (Product 3 >>> Product 5)
Product 30



Thank you for this code!

What does the code for going backwards look like?  That is,
suppose we have an instance of Category with only one object.
What is the Haskell code for the function which takes the
category instance and produces a monoid thing, like your integers
with 1 and usual integer multiplication?  Could we use a
constraint at the level of types, or at some other level, to
write the code?  Here by "constraint" I mean something like a
declaration that is a piece of Haskell source code, and not
something the human author of the code uses to write the code.

Maybe "Categorical Programming for Data Types with Restricted
Parametricity" by D. Orchard and Alan Mycroft

  http://www.cl.cam.ac.uk/~dao29/drafts/tfp-structures-orchard12.pdf

has something to do with this.

oo--JS.




--
frigidcode.com






Re: [Haskell-cafe] Categories (cont.)

2012-12-20 Thread Jay Sulzberger



On Thu, 20 Dec 2012, Gershom Bazerman gersh...@gmail.com wrote:


On 12/20/12 10:05 PM, Jay Sulzberger wrote:

What does the code for going backwards look like?  That is,
suppose we have an instance of Category with only one object.
What is the Haskell code for the function which takes the
category instance and produces a monoid thing, like your integers
with 1 and usual integer multiplication?  Could we use a
constraint at the level of types, or at some other level, to
write the code?  Here by "constraint" I mean something like a
declaration that is a piece of Haskell source code, and not
something the human author of the code uses to write the code.

instance C.Category k => Monoid (k a a) where
   mempty = C.id
   mappend = (C..)

The above gives witness to the fact that, if I'm using the language 
correctly, if we choose any object (our a) in any given category, this 
induces a monoid with the identity morphism as unit and composition of 
endomorphisms as append.


The standard libraries in fact provide this instance for the function arrow 
category (under a newtype wrapper):


newtype Endo a = Endo { appEndo :: a -> a }

instance Monoid (Endo a) where
   mempty = Endo id
   Endo f `mappend` Endo g = Endo (f . g)

--Gershom
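
For example (a hypothetical GHCi session, after importing Data.Monoid):

h> appEndo (Endo (+1) `mappend` Endo (*2)) 5
11

since mappend here applies (*2) first and then (+1).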


Thanks, Gershom!

I think I see.  The Haskell code picks out the
isotropy/holonomy monoid at the object a of any Haskell
Category instance.

actual old fashioned types remark: To get the holonomy
semigroup^Wmonoid, interpolate a functor.

I am glad that Haskell today smoothly handles this.

ad paper on polymorphisms: I hope to post a rant against the
misleading distinction between "parametric polymorphism" and "ad
hoc polymorphism".  Lisp will be used as a bludgeon in the only
argument in the rant.  The Four Things Which Must Be
Distinguished will perform the opening number.

oo--JS.



Re: [Haskell-cafe] category design approach for inconvenient concepts

2012-12-18 Thread Jay Sulzberger



On Tue, 18 Dec 2012, Christopher Howard wrote:


On 12/17/2012 06:30 PM, Richard O'Keefe wrote:


On 18/12/2012, at 3:45 PM, Christopher Howard wrote:

It's basically the very old idea that an Abstract Data Type
should be a nice algebra: things that look as though they
ought to fit together should just work, and rearrangements
of things ought not to change the semantics in surprising
ways (i.e., Don't Be SQL).



Categories contain two things:
objects
and arrows that connect objects.  Some important things about arrows:
 - for any object x there must be an identity idx : x -> x
 - any two compatible arrows must compose in one and only one way:
   f : x -> y  g : y -> z  =>  g . f : x -> z
 - composition must be associative (f . g) . h = f . (g . h)
   when the arrows fit together.

Of course for any category there is a dual category,
so what I'm about to say doesn't really make sense,
but there's sense in it somewhere:  the things you are
trying to hook together with your . operator seem to me more
like objects than arrows, and it does not seem as if
they hook together in one and only one way, so it's not so
much _associativity_ being broken, as the idea of . being
a composition in the first place.




Since I received the two responses to my question, I've been trying to
think deeply about this subject, and go back and understand the core
ideas. I think the problem is that I really don't have a clear
understanding of the basics of category theory, and an even less clear idea
of the connection to Haskell programming. I have been reading every link
I can find, but I'm still finding the ideas of objects and especially
morphisms to be quite vague.


Much discussion that I have seen of categories and Haskell is
imprecise.  It is often possible to convey some fact, or point of
view, using imprecise language, but in many cases the
communication will fail unless the reader has a solid
understanding of the basics.  Getting this understanding requires
studying harsh and often off-putting textbooks.

I will say two things:

1. Usually, to understand category theory a firm grasp of the
concept "structure" is required.

2. To connect categories with Haskell requires some apparatus,
which apparatus should be laid out precisely, at least once in
the exposition.

ad 1: By a "structure" I mean what Bourbaki calls a structure.
  See:

  http://en.wikipedia.org/wiki/Mathematical_structure
  [page was last modified on 7 August 2012 at 17:15]

  http://en.wikipedia.org/wiki/Structure_%28mathematical_logic%29
  [page was last modified on 15 December 2012 at 19:36]

  http://en.wikipedia.org/wiki/Algebraic_structure
  [page was last modified on 11 December 2012 at 02:51]

ad 2: The tutorial should carefully distinguish these different
things:

  a. the text of a Haskell program
  b. the binary of the now compiled program
  c. the running of the program
  d. the input output behavior of the program

Each of these gives rise to at least one category, and there are
various functors among these categories.

Set theory, say ZFC style, and New Crazy Type Theory, offer two
different Background Mechanisms to explicate the notion of
"structure".  There are similarities between these two Grand
Theories, but there are also political differences.

ad seeming vagueness of the notions "object" and "morphism": This
vagueness, which is felt by all students at first, is often
explicated/excused by saying that the notions are "more abstract"
than, say, the notions of integer and addition of integers and
multiplication of integers.  This way of speaking is not
entirely wrong, but I think it is mainly wrong and misleading.
Bourbaki somewhere says something like:

  Recent mathematics characteristically differs from older
  mathematics in that our axiom systems usually have more than
  one model, whereas in the old mathematics, theories usually had
  only one model.

Here is how this explicates the sentence:

   The theory of rings is more abstract than the theory of
   addition and multiplication of the integers.

We all know the integers.  The integers form a set, with such
elements as 0, 1, 17, -345, and so on.  We have learned two
operations on this single set: addition and multiplication.  This
understanding is a concrete understanding of a single concrete
object, which does indeed have parts, such as -345, and the
operation +, for example, but all the statements we might make
can, in some sense, be settled by looking at this single
structure and its various parts.  Or so we feel.  (Note that in the
sentence in which there are two occurrences of the word
"concrete", the two occurrences must have different senses.)

But the theory of rings is quite a different theory.  There are
many different structures which are studied in the theory of
rings.  For example, the ring of integers is one such structure.
There is also the ring of complex numbers.  There is also the
ring of 2 x 2 matrices over the integers.  Another ring is the
ring of 3 x 3 

Re: [Haskell-cafe] Taking over ghc-core

2012-11-16 Thread Jay Sulzberger



On Fri, 16 Nov 2012, Carter Schonwald carter.schonw...@gmail.com wrote:


how would ghc-core enable targeting core for Agda?


On Wed, Nov 14, 2012 at 6:32 PM, Andreas Abel andreas.a...@ifi.lmu.de wrote:


Excellent!

With ghc-core being maintained again, we can start thinking about
compiling Agda to core instead of hs.

Andreas


I would like to be able to take the textual version of Core
output by GHC and use that text as input to the next stage of
compilation.

oo--JS.





On 11.11.12 11:41 AM, Bas van Dijk wrote:


Great!

On 10 November 2012 16:17, Shachaf Ben-Kiki shac...@gmail.com wrote:


With Don Stewart's blessing
(https://twitter.com/donsbot/status/267060717843279872), I'll be
taking over maintainership of ghc-core, which hasn't been updated
since 2010. I'll release a version with support for GHC 7.6 later
today.

 Shachaf






--
Andreas Abel  <><      Du bist der geliebte Mensch.

Theoretical Computer Science, University of Munich
Oettingenstr. 67, D-80538 Munich, GERMANY

andreas.a...@ifi.lmu.de
http://www2.tcs.ifi.lmu.de/~abel/









Re: [Haskell-cafe] Taking over ghc-core

2012-11-16 Thread Jay Sulzberger



On Fri, 16 Nov 2012, Brent Yorgey byor...@seas.upenn.edu wrote:


On Fri, Nov 16, 2012 at 03:25:35PM -0500, Jay Sulzberger wrote:



On Fri, 16 Nov 2012, Carter Schonwald carter.schonw...@gmail.com wrote:


how would ghc-core enable targeting core for Agda?


On Wed, Nov 14, 2012 at 6:32 PM, Andreas Abel andreas.a...@ifi.lmu.de wrote:


Excellent!

With ghc-core being maintained again, we can start thinking about
compiling Agda to core instead of hs.

Andreas


I would like to be able to take the textual version of Core
output by GHC and use that text as input to the next stage of
compilation.

oo--JS.


Note that the ghc-core package only does pretty-printing of GHC core.
Whether GHC can parse a textual representation of GHC core (like Jay
and, presumably, Andreas want) is unrelated to the ghc-core package.

-Brent


Thanks, Brent!

My post was based on ignorance.

OK, I would like to suggest a talk at NYHUG on the GHC pipeline.
Else I might look at it without the support of NYHUGgers and
drink.

oo--JS.










[Haskell-cafe] Reminder and Forward: Wednesday 14 November 2012 NYHUG Inaugural Meetup: Ozgun Ataman on Practical Data Processing and Gershom Bazerman on Putting Cloud Haskell to Work

2012-11-12 Thread Jay Sulzberger
 of the generosity of Pivotal Labs, and after the
 talks, we're planning to keep the discussion going over food and drink
 at a nearby establishment.



Distributed poC TINC:

Jay Sulzberger secret...@lxny.org
Corresponding Secretary LXNY
LXNY is New York's Free Computing Organization.
http://www.lxny.org



Re: [Haskell-cafe] [Haskell] ANN: HERMIT for GHC 7.6

2012-09-25 Thread Jay Sulzberger

[Note that original post was on the Haskell list, and not the
Haskell-cafe list.]

On Tue, 25 Sep 2012, Andrew Farmer afar...@ittc.ku.edu wrote:


A new version of HERMIT (0.1.2.0) that is compatible with GHC 7.6 is now
available on hackage. This release has essentially the same functionality
as the 7.4-only pre-alpha release made before ICFP 2012.

HERMIT (Haskell Equational Reasoning Model-to-Implementation Tunnel) is a
plugin for GHC that provides an interactive interface for applying
transformations directly to GHC's internal intermediate language, Core.
This plugin is part of a larger HERMIT toolkit, which is being developed at
the University of Kansas with the aims of supporting equational reasoning
and allowing custom optimizations to be applied without modifying either
GHC or the Haskell source code.

Introduction to HERMIT via Neil Sculthorpe's Haskell Symposium 2012 talk:

http://www.youtube.com/watch?v=x2QH3jJCJso

Example transformations can be found in the examples folder, contained in
the cabal source package.

Hackage: http://hackage.haskell.org/package/hermit
Github (for bug reports): https://github.com/ku-fpg/hermit

Andrew Farmer


Thank you and all the HERMIT Team for this!

Does the HERMIT system provide an executable Haskell Core?  That
is, can we get an ordinary text file which can be executed by
some version of GHC+HERMIT?

oo--JS.



Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-20 Thread Jay Sulzberger



On Thu, 20 Sep 2012, o...@okmij.org wrote:



Dan Doel wrote:

P.S. It is actually possible to write the zip function using the
Boehm-Berarducci encoding:
http://okmij.org/ftp/Algorithms.html#zip-folds


If you do, you might want to consider not using the above method, as I
seem to recall it doing an undesirable amount of extra work (repeated
O(n) tail).

It is correct. The Boehm-Berarducci web page discusses at some length
the general inefficiency of the encoding: the need for repeated
reflections and reifications for some (but not all) operations. That
is why arithmetic on Church numerals is generally a bad idea.

A much better encoding of numerals is what I call P-numerals
http://okmij.org/ftp/Computation/lambda-calc.html#p-numerals
It turns out I have re-discovered them after Michel Parigot (so my
name "P-numerals" is actually meaningful). Not only are they faster; one
can _syntactically_ prove that PRED . SUCC is the identity.


What is the setup that, here, gives the distinction between a
syntactic proof and some other kind of proof?

oo--JS.




The general idea of course is Goedel's recursor R.

  R b a 0 = a
  R b a (Succ n) = b n (R b a n)

which easily generalizes to lists. The enclosed code shows the list
encoding that has constant-time cons, head, tail and trivially
expressible fold and zip.


Kim-Ee Yeoh wrote:

So properly speaking, tail and pred for Church-encoded lists and nats
are trial-and-error affairs. But the point is they need not be if we
use B-B encoding, which looks _exactly_ the same, except one gets a
citation link to a systematic procedure.

So it looks like you're trying to set the record straight on who actually
did what.


Exactly. Incidentally, there is more than one way to build a
predecessor of Church numerals. Kleene's solution is not the only
one. Many years ago I was thinking on this problem and designed a
different predecessor:

excerpted from http://okmij.org/ftp/Haskell/LC_neg.lhs


   One ad hoc way of defining a predecessor of a positive numeral
predp cn+1 == cn
   is to represent predp cn as cn f v
   where f and v are so chosen that (f z) acts as
if z == v then c0 else (succ z)
   We know that z can be either a numeral cn or a special value v. All
   Church numerals have a property that (cn combI) is combI: the identity
   combinator is a fixpoint of every numeral. Therefore, ((cn combI) (succ
   cn)) reduces to (succ cn). We only need to choose the value v in such
   a way that ((v I) (succ v)) yields c0.

predp = eval $
  c ^ c
   # (z ^ (z # combI # (succ # z)))   -- function f(z)
   # (a ^ x ^ c0) -- value v


{-# LANGUAGE Rank2Types #-}

-- List represented with R

newtype R x = R{unR :: forall w.
                 -- b
                 (x -> R x -> w -> w)
                 -- a
                 -> w
                 -- result
                 -> w}

nil :: R x
nil = R (\b a -> a)

-- constant time
cons :: x -> R x -> R x
cons x r = R (\b a -> b x r (unR r b a))

-- constant time
rhead :: R x -> x
rhead (R fr) = fr (\x _ _ -> x) (error "head of the empty list")

-- constant time
rtail :: R x -> R x
rtail (R fr) = fr (\_ r _ -> r) (error "tail of the empty list")

-- fold
rfold :: (x -> w -> w) -> w -> R x -> w
rfold f z (R fr) = fr (\x _ w -> f x w) z

-- zip is expressed via fold
rzipWith :: (x -> y -> z) -> R x -> R y -> R z
rzipWith f r1 r2 = rfold f' z r1 r2
  where f' x tD = \r2 -> cons (f x (rhead r2)) (tD (rtail r2))
        z       = \_  -> nil

-- tests

toR :: [a] -> R a
toR = foldr cons nil

toL :: R a -> [a]
toL = rfold (:) []

l1 = toR [1..10]
l2 = toR "abcde"

t1 = toL $ rtail l2
-- "bcde"

t2 = toL $ rzipWith (,) l2 l1
-- [('a',1),('b',2),('c',3),('d',4),('e',5)]





Re: [Haskell-cafe] Church vs Boehm-Berarducci encoding of Lists

2012-09-20 Thread Jay Sulzberger



On Thu, 20 Sep 2012, Jay Sulzberger wrote:






What is the setup that, here, gives the distinction between a
syntactic proof and some other kind of proof?

oo--JS.


Ah, I have just read the "for-any" vs "for-all" part of

  http://okmij.org/ftp/Computation/lambda-calc.html#p-numerals

and I think I understand something.

oo--JS.









[Haskell-cafe] Pointer to clarifying page "Hask" for beginners who take claims of rigor too naively, was: Re: From monads to monoids in a small category

2012-09-05 Thread Jay Sulzberger



On Tue, 4 Sep 2012, Alexander Solla alex.so...@gmail.com wrote:


On Tue, Sep 4, 2012 at 4:21 PM, Alexander Solla alex.so...@gmail.com wrote:




On Tue, Sep 4, 2012 at 3:39 AM, Alberto G. Corona agocor...@gmail.com wrote:


"Monads are monoids in the category of endofunctors."

This Monoid instance for the endofunctors of the set of all elements
of (m a) typechecks in Haskell with FlexibleInstances:

instance Monad m => Monoid (a -> m a) where
   mappend = (>=>)   -- Kleisli composition
   mempty  = return
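
A newtype makes the same idea compile without FlexibleInstances (a
sketch against the 2012-era Monoid class; the name KleisliEndo is
introduced here for illustration):

import Control.Monad ((>=>))
import Data.Monoid (Monoid(..))

-- Kleisli endomorphisms a -> m a form a monoid under Kleisli
-- composition, with return as the unit.
newtype KleisliEndo m a = KleisliEndo { runKleisliEndo :: a -> m a }

instance Monad m => Monoid (KleisliEndo m a) where
  mempty = KleisliEndo return
  KleisliEndo f `mappend` KleisliEndo g = KleisliEndo (f >=> g)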



The objects of a Kleisli category for a monad m aren't endofunctors.  You
want something like:

instance Monad m => Monoid (m a -> m (m a)) where ...

/These/ are endofunctors, in virtue of join transforming an m (m a) into
an (m a).



Actually, even these aren't endofunctors, for a similar reason: you
really want something like

instance Monad m => Monoid (m a -> m a) where
   mempty = id
   mappend = undefined -- exercise left to the reader

(i.e., you want to do the plumbing through the Eilenberg-Moore category for a
monad, instead of the Kleisli category for a monad -- my last message
exposes the kind of plumbing you want, but not the right types.)


This is not directly responsive to what has been written in this
thread, but is a beginner's, likely mistaken, complaint:

In many expositions of monads and other such Haskelliana, a few
words of explanation in this style are often missing:

  A monad in category theory is <here is a pointer, or an actual explanation/>.

  To define the monad M in Haskell, we must pick out a category
  embedded in some sense in Haskell.  A category has objects and
  morphisms which must obey certain laws.  So for M we choose <something/>.

Now the <something/> directly above might pick out something
which cannot be easily directly described inside the Haskell
world, or even in the first layers of the onion of
worlds-about-Haskell.  For example we might have:

  The objects of our category are types whose sets of values,
  always taken without _|_, are finite, or if infinite, come with
  a partial ordering which is isomorphic to a subset of N^2, with
  the usual product of N ordering.

Note I have no example in mind here, though certainly one can
make one up.  It is the style I am pointing out.

We then must have a paragraph defining, in a similar style, the
morphisms, the identity morphisms, and the composition of
morphisms of our category, which is embedded, by means outside
of Haskell, into Haskell.

Then we must have a paragraph defining the monad itself, so we
have our endofunctor T, as Wikipedia calls it in the article on
monads (category theory sense), and the two natural
transformations \eta and \mu.

And then, ah, hmunh, mmh, Oi! again I have failed to look at the
standard introductory literature.  The missing gentle words of
clarification, and thus, encouragement, are at

  http://www.haskell.org/haskellwiki/Hask

and the two notes there pointed to.

oo--JS.



Re: [Haskell-cafe] Build regressions due to GHC 7.6

2012-08-31 Thread Jay Sulzberger



On Fri, 31 Aug 2012, Alexander Kjeldaas alexander.kjeld...@gmail.com wrote:


I think you're making this way harder than it really is.

What 99% of people need is that Hackage packages build with the latest
Haskell Platform, and/or with bleeding-edge GHC, and with the latest
versions of their dependencies.

Thus for every dependency there is only one possible version, the latest
one, and there are only a couple of compilers.

Having special interest groups for ghc 6.12 support and old versions of
text is fine, but I think it is a pretty uninteresting problem to solve.

Likewise, supporting/fixing packages where the author for some reason
*requires* use of a non-current version of some other package is also quite
uninteresting (or at least outside the scope of my needs).   Such a package
is basically just a relic.

Alexander


If that is the set of constraints you must meet, namely that for
every library you wish to use, the same platform is specified as
the only platform you want, then yes, I agree.  I am also sympathetic
to imposing such tight management of the repository.  (This tight
management could consist of just well-checked annotations as to
what works with what.)  But there are other cases, for example,
testing various combinations of libraries for speed and memory
use.  And such a flexible tool will be of use when the Move To
One Platform comes.  Being able to pick up code and modify code,
and to write code to do this, is part of the tradition today
called functional programming.  The Cut Over would be done with
a Big Haskell Program that would test/re-write/test etc. until
every line of code was updated and a database of interactions
produced and made available in convenient form.

oo--JS.





Re: [Haskell-cafe] Build regressions due to GHC 7.6

2012-08-30 Thread Jay Sulzberger



On Thu, 30 Aug 2012, Alexander Kjeldaas alexander.kjeld...@gmail.com wrote:


This is very unfortunate, but this is crucially a tooling issue.  I am
going to wave my hands, but..

Ignore the mapreduce in the following video, but look at the use of clang
to do automatic refactoring of C++.  This is *incredibly* powerful in
dealing with updates to APIs.

http://www.llvm.org/devmtg/2011-11/videos/Carruth_ClangMapReduce-desktop.mp4

But without all that fancy tech, *just* having all of Hackage source code
in one repository and using perl/regexps, fixing these types of issues is
O(1) instead of O(n).

All of the issues you mention seems to be fixable by a few lines of perl
*if we had the repository*.


Better to do this with sexps.

ad repositories: Part of the general problem of managing a
repository is close to the problem of inferring a good type for
(the value of) an expression.  The style of constraints is
similar.  Now the design problem is:

1. Arrange a general system for the specification of the
   constraints.

2. Design a systematic way of giving both advice and direct
   commands to the system.  This subsystem would be used to set
   up the default policy.

3. Choose a constraint solver.

Maybe worth looking at:

  http://en.wikipedia.org/wiki/Nix_package_manager
  [page was last modified on 17 July 2012 at 20:20]

oo--JS.




[a few hours later]

Actually, I went and downloaded all of hackage, put it into a git
repository and fixed these issues:

Fix catch
perl -ni -e 'print unless /import Prelude hiding \(catch\)/' $(git grep
'import Prelude hiding (catch)')

Fix CInt constructors (lots of other stuff from Foreign.C.Types not fixed
though)
perl -p -i -e 's/^import Foreign.C.Types(.*)CInt([^(])/import
Foreign.C.Types${1}CInt(..)${1}/g' $(git grep -l '^import.*CInt')

Fix bytestring versioning
perl -p -i -e 's/bytestring( +)>=([0-9. ]+)([ ]*)<0.10/bytestring$1>=$2${3}<0.11/g' $(git grep 'bytestring.*< *0\.')

Patch to hackage:
http://ge.tt/6Cb5ErM/v/0

I understand that this doesn't help anyone, but if there were a way to fix,
upload, and get *consensus* on a few regexps like this, then doing API
changes wouldn't be such a headache.

Alexander

On 30 August 2012 07:26, Bryan O'Sullivan b...@serpentine.com wrote:


Since the release of the GHC 7.6 RC, I've been going through my packages
and fixing up build problems so that people who upgrade to 7.6 will have a
smooth ride.

Sad to say, my experience of 7.6 is that it has felt like a particularly
rough release for backwards incompatibility. I wanted to quantify the pain,
so I did some research, and here's what I found.

I maintain 25 open source Haskell packages. Of these, the majority have
needed updates due to the GHC 7.6 release:

   - base16-bytestring
   - blaze-textual
   - bloomfilter
   - configurator
   - criterion
   - double-conversion
   - filemanip
   - HDBC-mysql
   - mwc-random
   - pcap
   - pool
   - riak-haskell-client
   - snappy
   - text
   - text-format
   - text-icu

That's 16 out of 25 packages I've had to update. I've also either reported
bugs on, or had to fix, several other people's packages along the way
(maybe four?). So let's say I've run into problems with 20 out of the
combined 29 packages of mine and my upstreams.

The reasons for these problems fall into three bins:

   - Prelude no longer exports catch, so a lot of import Prelude hiding
   (catch) had to change.
   - The FFI now requires constructors to be visible, so CInt has to be
   imported as CInt(..).
   - bytestring finally got bumped to 0.10, so many upper bounds had to
   be relaxed (*cf* my suggestion that the upper-bounds-by-default policy
   is destructive).
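
In code, the first two of these bins look like so (a minimal sketch, not
taken from any particular package):

   -- GHC 7.4: Prelude exported catch, so modules began with
   --   import Prelude hiding (catch)
   -- GHC 7.6: Prelude no longer exports catch, so that line simply goes.
   import Control.Exception (SomeException, catch)

   -- GHC 7.6 FFI: the constructor must now be imported explicitly.
   import Foreign.C.Types (CInt (..))

   main :: IO ()
   main = print (CInt 42) `catch` \e -> print (e :: SomeException)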

It has been a lot of work to test 29 packages, and then modify, rebuild,
and release 20 of them. It has consumed most of my limited free time for
almost two weeks. Worse, this has felt like make-work, of no practical
benefit to anyone beyond scrambling to restore the status quo ante.

If over half of my packages needed fixing, I'm alarmed at the thought of
the effects on the rest of Hackage.

I'm torn over this. I understand and agree with the impetus to improve the
platform by tidying things up, and yet just two seemingly innocuous changes
(catch and FFI) have forced me to do a bunch of running to stand still.

I don't have any suggestions about what to do; I know that it's hard to
estimate the downstream effects of what look like small changes. And so I'm
not exactly complaining. Call this an unhappy data point.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] formal semantics

2012-08-25 Thread Jay Sulzberger



On Sat, 25 Aug 2012, Kristopher Micinski krismicin...@gmail.com wrote:


On Thu, Aug 23, 2012 at 12:23 PM, Ramana Kumar
ramana.ku...@cl.cam.ac.uk wrote:

Dear Haskell Cafe

I'm looking for information on past and current attempts to write semantics
for Haskell.
Features I'm particularly interested in are:

formal
mechanised
maintainable
up to date

Of course, if nothing like that exists then partial attempts towards it
could still be useful.

My ultimate aims include:

Make it viable to define Haskell formally (i.e. so mechanised semantics can
take over the normative role of the Haskell reports).
Write a verified (or verify an existing) Haskell compiler (where verified
means semantics preserving).

Cheers,
Ramana



Ramana,

If you look through the Haskell reports, you'll see that the language
is typically explained by its desugaring to a core language which
has the semantics you'd expect, in the sense that it's a call-by-need
abstract machine implemented by means of graph reduction in the form
of the STG machine.

Thus, you typically want to think about the semantics of core
Haskell, in which you might try understanding the semantics of the
STG machine.

You can certainly look at the classic article [1] that describes the
behavior, at a high level.  You might ask whether the high level
description of the STG machine really makes sense, at which point
I'd direct you to a number of other articles (the one that sticks in
my memory, but I haven't really read deeply, is [2]).

It sounds, however, that you are looking for a more full description
of the language's semantics in a formal manner, going from real
Haskell to core Haskell, I feel such a reduction must surely exist but
I'm not sure I can recall one.

If you were to write a verified compiler, you would need a semantics
for the STG machine and show that it obeyed the rules you'd expect (a
call by name semantics), and then compose your proof for that with
your reduction of real Haskell to core Haskell.

kris

[1] Implementing lazy functional languages on stock hardware: the
Spineless Tagless G-machine, Simon L. Peyton Jones,
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.3729
[2] The Spineless Tagless G-machine, naturally, Jon Mountjoy,
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.13.8726


I do not know Haskell.  It looks to me as though there are
several pieces of the mechanism:

1. There is, once the extensions are specified, a particular Type
System, that is, a formal system with, on the syntactic side, at
least, assumptions, judgements, rules of inference, terms lying
in some lambda calculus, etc..

2. The Type Inference Subsystem, which using some constraint
solver calculates the type to be assigned to the value of Haskell
expressions.

3. The machine which does reduction, perhaps execution, on
the value of Haskell expressions.  This is, by your account, the
STG machine.

There is a textual version of Haskell's Core.  If it were
executable and the runtime were solid and very simple and clear
in its design, I think we would have something close to a formal
semantics.  We'd also require that the translation to STG code
be very simple.

I think I have now mostly just repeated what you said.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] formal semantics

2012-08-25 Thread Jay Sulzberger



On Sat, 25 Aug 2012, Kristopher Micinski krismicin...@gmail.com wrote:



I do not know Haskell.  It looks to me as though there are
several pieces of the mechanism:

1. There is, once the extensions are specified, a particular Type
System, that is, a formal system with, on the syntactic side, at
least, assumptions, judgements, rules of inference, terms lying
in some lambda calculus, etc..



That's right.  Extensions get complex too, however, and can't be
necessarily easily dismissed (not to imply you were doing so),
RankNTypes, for example,


I suspected that some of these extensions might cause Haskell
expressions to be hard to type.  One thing I am not clear on is
whether any standard extensions require modifications to
(internal) Core.




2. The Type Inference Subsystem, which using some constraint
solver calculates the type to be assigned to the value of Haskell
expressions.



Yes, that's right; for core Haskell this is typically Damas-Milner
(let-bound) polymorphism
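
A one-liner illustrating let-bound generalization (a sketch, names
invented): the let-bound f is generalized, so it can be used at two
different types in the same expression, which a lambda-bound variable
cannot be.

   pairUp :: (Bool, Int)
   pairUp = let f x = x          -- f :: forall a. a -> a
            in (f True, f 3)     -- instantiated at Bool and at Int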


Ah, yes, thank you.  I almost believe in Hindley-Damas-Milner but
have not yet attempted the standard initiation course.




3. The machine which does reduction, perhaps execution, on
the value of Haskell expressions.  This is, by your account, the
STG machine.



Yes, notably graph reduction allows sharing, which is an important
part of Haskell's semantics,


Ah, thanks!




There is a textual version of Haskell's Core.  If it were
executable and the runtime were solid and very simple and clear
in its design, I think we would have something close to a formal
semantics.  We'd also require that the translation to STG code
be very simple.



Yes, that's right.  The translation to STG (or something like it,
another core language) can be found in many books and articles,


This is good.  I will look at the references given in this
thread.  The account at

  
http://web.archive.org/web/20060206074101/http://www.cse.ogi.edu/~mpj/thih/TypingHaskellInHaskell.html

is, I think, one part of what I was looking for.



But others here have also specified some good references for
executable versions of Core.  Still unsure if the translation from
Haskell to Core has been verified, I would suspect not, as I haven't
heard of any such thing.

kris


This part of the project I am less interested in, due to my fear,
I am an old Lisper, that the luxuriant syntactic undergrowth
might be hard to hack through.  If we had an executable Core,
which might have to be extended (after perhaps some subtraction)
with a simple notation for type annotations, and the rest of the
apparatus, I think this would be very useful.  Even though we
would not gain one of the claimed advantages of
rigor-all-the-way, that is, better bug suppression for ordinary
Haskell as she is spoke.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] formal semantics

2012-08-25 Thread Jay Sulzberger



On Sat, 25 Aug 2012, Kristopher Micinski krismicin...@gmail.com wrote:


On Sat, Aug 25, 2012 at 3:38 PM, Jay Sulzberger j...@panix.com wrote:

This is good.  I will look at the references given in this
thread.  The account at


http://web.archive.org/web/20060206074101/http://www.cse.ogi.edu/~mpj/thih/TypingHaskellInHaskell.html

is, I think, one part of what I was looking for.



The book I recommend (although now I feel like a bad person because I
haven't read all of it :-( ) is The Implementation of Functional
Languages,

http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/index.htm

I remember finding it quite approachable, although for the immediate
need of trying to implement *Haskell* in a verified environment, it
might not be immediately helpful, but it's really good background reading
on the subject that will be imperative should you want to do such
things (and written around the time when Haskell was congealing, so
should be representative-ish of the attitudes underlying Haskell's
design at the time: graph reduction, compiling pattern matching,
translating high level lamda languages to core semantics).

kris


I have read the book to page 12.  So far I am swimming in waters
known to me.  I think you are right: It looks to be slow and
careful, and thus just right for me.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell master thesis project

2012-08-20 Thread Jay Sulzberger



On Mon, 20 Aug 2012, Francesco Mazzoli f...@mazzo.li wrote:


Hi list(s),

I've been hooked on Haskell for a while now (some of you might know me as
bitonic on #haskell), and I find myself having to decide on a project for my
master's thesis.

Inspired by David Terei's master's thesis (he wrote the LLVM backend), I was
wondering if there were any projects requiring similar effort that will benefit
the Haskell community.

--
Francesco * Often in error, never in doubt


The map from Source Code to Executable is one of the Great
Functors of Programming.  Lisp has the advantage that this
functor is visible and the objects and maps of the domain
category Source Code are easy to pick up and modify.

I think ghc may be instructed to output a textual representation
of the Core code produced on the way to the executable from
Haskell source code.  But this representation is, in part,
inadequate:

1. The representation is not faithful.  Thus, for example, we
cannot feed the textual representation of Core into the next part
of the Haskell compiler pipeline and get the same executable
we would get by running ghc on the Haskell source code.

2. The textual syntax is not sufficiently regular, so that maps
in the Source Code category, Core Variant, are not as easy to
code as they might be.

Of course, I write Lisp, so if you improved the Core side-pipe I
could easily continue to write sexps, and then have them
transformed to a Haskell executable.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-18 Thread Jay Sulzberger

When I was last on the best rooftop in the Mid Upper West Side of
Manhattan I was told of the work on logical relations.  I did not
know of this three decades old line of work.  I have grabbed

http://homepages.inf.ed.ac.uk/gdp/publications/Par_Poly.pdf

To me, the style is comfortable and the matter feels solid.
That is, the authors and I know some of the same kennings.

Thanks Lispers!  Thanks Haskellers!

oo--JS.


On Thu, 16 Aug 2012, Jay Sulzberger wrote:




On Wed, 15 Aug 2012, wren ng thornton w...@freegeek.org wrote:


On 8/13/12 5:42 PM, Jay Sulzberger wrote:

One difficulty which must impede many who study this stuff is
that just getting off the ground seems to require a large number
of definitions of objects of logically different kinds. (By
logic I mean real logic, not any particular formalized system.)
We must have expressions, values, type expressions, rules of
transformation at the various levels, the workings of the
type/kind/context inference system, etc., to get started.
Seemingly Basic and Scheme require less, though I certainly
mention expressions and values and types and
objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction
to Scheme.


Indeed, starting with Haskell's type system is jumping in at the deep end. 
And there isn't a very good tutorial on how to get started learning type 
theory. Everyone I know seems to have done the stumble around until it 
clicks routine--- including the folks whose stumbling was guided by formal 
study in programming language theory.


However, a good place to start ---especially viz a vis Scheme/Lisp--- is to 
go back to the beginning: the simply-typed lambda-calculus[1]. STLC has far 
fewer moving parts. You have type expressions, term expressions, term 
reduction, and that's it.
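
For concreteness, a minimal sketch of exactly those parts in Haskell
(names invented):

   data Ty = Base | Arr Ty Ty           -- type expressions: o, t1 -> t2
     deriving (Eq, Show)

   data Tm = Var Int                    -- de Bruijn index into the context
           | Lam Ty Tm                  -- \x:t1. e
           | App Tm Tm                  -- e1 e2

   -- The typing judgment ctx |- e : t, as a partial function.
   typeOf :: [Ty] -> Tm -> Maybe Ty
   typeOf ctx (Var i)   = if i < length ctx then Just (ctx !! i) else Nothing
   typeOf ctx (Lam t e) = fmap (Arr t) (typeOf (t : ctx) e)
   typeOf ctx (App f a) = case (typeOf ctx f, typeOf ctx a) of
     (Just (Arr dom cod), Just t) | t == dom -> Just cod
     _                                       -> Nothing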


Yes.  The simply-typed lambda-calculus presents as a different
sort of thing from the untyped lambda calculus, and the many
complexly typed calculi.

I'd add to the list of components of the base machine of STLC these
things:

1. The model theory, which is close to the model theory of the
  Lower Predicate Calculus.

2. The explication of execution of a program, which is more
  subtle than anything right at the beginning of the study of
  the Lower Predicate Calculus.  It certainly requires a score
  of definitions to lay it out clearly.

But, to say again, yes the STLC can, like linear algebra 101, be
presented in this way: The machine stands alone in bright
sunlight.  There are no shadows.  Every part can be seen plainly.
The eye sees all and is satisfied.

ad 2: It would be worthwhile to have an introduction to STLC
which compares STLC's explication of execution of a program
with other standard explications, such as these:

1. the often not explicitly presented explication that appears in
  textbooks on assembly and introductory texts on computer hardware

2. the usually more explicitly presented explication that appears
  in texts on Basic or Fortran

3. the often explicit explication that appears in texts on Snobol

4. various explications of what a database management system does

5. explications of how various Lisp variants work

6. explications of how Prolog works

7. explications of how general constraint solvers work, including
  proof finders



Other lambda calculi add all manner of bells and whistles, but STLC is the 
core of what lambda calculus and type systems are all about. So you should 
be familiar with it as a touchstone. After getting a handle on STLC, then 
it's good to see the Barendregt cube. Don't worry too much about 
understanding it yet, just think of it as a helpful map of a few of the 
major landmarks in type theory. It's an incomplete map to be sure. One 
major landmark that's suspiciously missing lies about halfway between STLC 
and System F: that's Hindley--Milner--Damas, or ML-style, lambda 
calculus.[2]


8. explication of how Hindley--Milner--Damas works



After seeing the Barendregt cube, then you can start exploring in those 
various directions. Notably, you don't need to think about the kind level 
until you start heading towards LF, MLTT, System Fw, or CC, since those are 
where you get functions/reduction at the type level and/or multiple sorts at 
the type level.


Haskell98 (and the official Haskell2010) take Hindley--Milner--Damas as the 
starting point and then add some nice things like algebraic data types and 
type classes (neither of which are represented on the Barendregt cube). 
This theory is still relatively simple and easy to understand, albeit in a 
somewhat ad-hoc manner.


Unexpectedly, to me, missing word in explications of algebraic
data types and pattern matching: magma.



Modern Haskell lives somewhere beyond the top plane of the cube. We have 
all of polymorphism (aka System F, aka second-order quantification; via 
-XRankNTypes), most of type operators (i.e., extending System F to System 
Fw; via type families etc), some dependent types (aka first-order 
quantification; via GADTs), plus things

Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-16 Thread Jay Sulzberger



On Wed, 15 Aug 2012, wren ng thornton w...@freegeek.org wrote:


On 8/13/12 5:42 PM, Jay Sulzberger wrote:

One difficulty which must impede many who study this stuff is
that just getting off the ground seems to require a large number
of definitions of objects of logically different kinds. (By
logic I mean real logic, not any particular formalized system.)
We must have expressions, values, type expressions, rules of
transformation at the various levels, the workings of the
type/kind/context inference system, etc., to get started.
Seemingly Basic and Scheme require less, though I certainly
mention expressions and values and types and
objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction
to Scheme.


Indeed, starting with Haskell's type system is jumping in at the deep end. 
And there isn't a very good tutorial on how to get started learning type 
theory. Everyone I know seems to have done the stumble around until it 
clicks routine--- including the folks whose stumbling was guided by formal 
study in programming language theory.


However, a good place to start ---especially viz a vis Scheme/Lisp--- is to 
go back to the beginning: the simply-typed lambda-calculus[1]. STLC has far 
fewer moving parts. You have type expressions, term expressions, term 
reduction, and that's it.


Yes.  The simply-typed lambda-calculus presents as a different
sort of thing from the untyped lambda calculus, and the many
complexly typed calculi.

I'd add to the list of components of the base machine of STLC these
things:

1. The model theory, which is close to the model theory of the
   Lower Predicate Calculus.

2. The explication of execution of a program, which is more
   subtle than anything right at the beginning of the study of
   the Lower Predicate Calculus.  It certainly requires a score
   of definitions to lay it out clearly.

But, to say again, yes the STLC can, like linear algebra 101, be
presented in this way: The machine stands alone in bright
sunlight.  There are no shadows.  Every part can be seen plainly.
The eye sees all and is satisfied.

ad 2: It would be worthwhile to have an introduction to STLC
which compares STLC's explication of execution of a program
with other standard explications, such as these:

1. the often not explicitly presented explication that appears in
   textbooks on assembly and introductory texts on computer hardware

2. the usually more explicitly presented explication that appears
   in texts on Basic or Fortran

3. the often explicit explication that appears in texts on Snobol

4. various explications of what a database management system does

5. explications of how various Lisp variants work

6. explications of how Prolog works

7. explications of how general constraint solvers work, including
   proof finders



Other lambda calculi add all manner of bells and whistles, but STLC is the 
core of what lambda calculus and type systems are all about. So you should be 
familiar with it as a touchstone. After getting a handle on STLC, then it's 
good to see the Barendregt cube. Don't worry too much about understanding it 
yet, just think of it as a helpful map of a few of the major landmarks in 
type theory. It's an incomplete map to be sure. One major landmark that's 
suspiciously missing lies about halfway between STLC and System F: that's 
Hindley--Milner--Damas, or ML-style, lambda calculus.[2]


8. explication of how Hindley--Milner--Damas works



After seeing the Barendregt cube, then you can start exploring in those 
various directions. Notably, you don't need to think about the kind level 
until you start heading towards LF, MLTT, System Fw, or CC, since those are 
where you get functions/reduction at the type level and/or multiple sorts at 
the type level.


Haskell98 (and the official Haskell2010) take Hindley--Milner--Damas as the 
starting point and then add some nice things like algebraic data types and 
type classes (neither of which are represented on the Barendregt cube). This 
theory is still relatively simple and easy to understand, albeit in a 
somewhat ad-hoc manner.


Unexpectedly, to me, missing word in explications of algebraic
data types and pattern matching: magma.



Modern Haskell lives somewhere beyond the top plane of the cube. We have 
all of polymorphism (aka System F, aka second-order quantification; via 
-XRankNTypes), most of type operators (i.e., extending System F to System Fw; 
via type families etc), some dependent types (aka first-order quantification; 
via GADTs), plus things not represented on the cube (e.g., (co)inductive data 
types, type classes, etc). Trying to grok all of that at once without prior 
understanding of the pieces is daunting to be sure.


Yes.




[1] Via Curry--Howard, the pure STLC corresponds to natural deduction for the 
implicational fragment of intuitionistic propositional logic. Of course, you 
can add products (tuples), coproducts (Either), and absurdity to get natural 
deduction for the full intuitionistic

Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-13 Thread Jay Sulzberger



On Mon, 13 Aug 2012, Johan Holmquist holmi...@gmail.com wrote:


That pattern looks so familiar. :) Existential types seem to fit into the
type system really well, so I never got why it is not part of the standard.
On Aug 12, 2012 10:36 AM, Daniel Trstenjak daniel.trsten...@gmail.com
wrote:


Does Haskell have a word for existential type declaration?  I
have the impression, and this must be wrong, that forall does
double duty, that is, it declares a for all in some sense like
the usual for all of the Lower Predicate Calculus, perhaps the
for all of system F, or something near it.

oo--JS.






Hi Oleg,

On Sat, Aug 11, 2012 at 08:14:47AM -, o...@okmij.org wrote:

I'd like to point out that the only operation we can do on the first
argument of MkFoo is to show it. This is all we can ever do:
we have no idea of its type but we know we can show it and get a
String. Why not apply show to start with (it won't be evaluated
until required anyway)?


It's only a test case. The real thing is for a game and will be
something like:

class EntityT e where
   update  :: e -> e

   render  :: e -> IO ()

   handleEvent :: e -> Event -> e

   getBound :: e -> Maybe Bound


data Entity = forall e. (EntityT e) => Entity e

data Level = Level {
   entities :: [Entity],
   ...
   }


Greetings,
Daniel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-13 Thread Jay Sulzberger



On Thu, 9 Aug 2012, wren ng thornton w...@freegeek.org wrote:


On 8/8/12 9:41 PM, Jay Sulzberger wrote:

Haskell's type classes look to me to be a provision for declaring
a signature in the sense of the above article.


Just to clarify this in the context of my previous post, type classes define 
signatures in two significantly different ways.


(1) The first way is as you suggest: the methods of a type class specify a 
signature, and for convenience we give that signature a name (i.e., the type 
class' name). However, this is a signature for the term level of Haskell 
(i.e., a signature in the Term sort not discussed previously; elements of 
Type classify elements of Term, just as elements of Kind classify elements of 
Type).


(2) The second way is that, at the type level, the collection of type class 
names together form a signature. Namely they form the signature comprising 
the majority of the Context sort.


Both senses are important for understanding the role of type classes in 
Haskell, but I believe that some of Patrick Browne's confusion is due to 
trying to conflate these into a single notion. Just as terms and types should 
not be confused, neither should methods and type classes. In both cases, each 
is defined in terms of the other, however they live in separate universes. 
This is true even in languages which allow terms to occur in type expressions 
and allow types to occur in term expressions. Terms denote values and 
computations (even if abstractly); whereas, types denote collections of 
expressions (proper types denote collections of term expressions; kinds 
denote collections of type expressions;...).


--
Live well,
~wren


Thanks, wren!

I am attempting to read the Haskell 2010 standard at

  http://www.haskell.org/onlinereport/haskell2010/

It is very helpful and much less obscure than I feared it would be.

What you say about the levels seems reasonable to me now, and I
begin dimly to see an outline of a translation to non-New Type
Theory stuff, which may help me to enter the World of New Type
Theory.

One difficulty which must impede many who study this stuff is
that just getting off the ground seems to require a large number
of definitions of objects of logically different kinds.  (By
logic I mean real logic, not any particular formalized system.)
We must have expressions, values, type expressions, rules of
transformation at the various levels, the workings of the
type/kind/context inference system, etc., to get started.
Seemingly Basic and Scheme require less, though I certainly
mention expressions and values and types and
objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction
to Scheme.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-13 Thread Jay Sulzberger



On Mon, 13 Aug 2012, Ryan Ingram ryani.s...@gmail.com wrote:


On Mon, Aug 13, 2012 at 12:30 PM, Jay Sulzberger j...@panix.com wrote:


Does Haskell have a word for existential type declaration?  I
have the impression, and this must be wrong, that forall does
double duty, that is, it declares a for all in some sense like
the usual for all of the Lower Predicate Calculus, perhaps the
for all of system F, or something near it.



It's using the forall/exists duality:
   exists a. P(a)  =  forall r. (forall a. P(a) -> r) -> r


;)

This is, I think, a good joke.  It has taken me a while, but I
now understand that one of the most attractive things about
Haskell is its sense of humor, which is unfailing.

I will try to think about this, after trying your examples.

I did suspect that, in some sense, constraints in combination
with forall could give the quantifier exists.

Thanks, ryan!

oo--JS.




For example:
   (exists a. Show a => a)  =  forall r. (forall a. Show a => a -> r) -> r

This also works when you look at the type of a constructor:

   data Showable = forall a. Show a => MkShowable a
   -- MkShowable :: forall a. Show a => a -> Showable

   caseShowable :: forall r. (forall a. Show a => a -> r) -> Showable -> r
   caseShowable k (MkShowable x) = k x

   testData :: Showable
   testData = MkShowable (1 :: Int)

   useData :: Int
   useData = case testData of (MkShowable x) -> length (show x)

   useData2 :: Int
   useData2 = caseShowable (length . show) testData

Haskell doesn't currently have first class existentials; you need to wrap
them in an existential type like this.  Using 'case' to pattern match on
the constructor unwraps the existential and makes any packed-up constraints
available to the right-hand-side of the case.
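
That duality can also be packed up as a first-class value; a small sketch
with invented names (needs RankNTypes):

   {-# LANGUAGE RankNTypes #-}

   -- An existential packaged as its own eliminator (CPS form).
   newtype ExShow = ExShow (forall r. (forall a. Show a => a -> r) -> r)

   packShow :: Show a => a -> ExShow
   packShow x = ExShow (\k -> k x)

   useExShow :: ExShow -> String
   useExShow (ExShow k) = k show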

An example of existentials without using constraints directly:

   data Stream a = forall s. MkStream s (s -> Maybe (a,s))

   viewStream :: Stream a -> Maybe (a, Stream a)
   viewStream (MkStream s k) = case k s of
       Nothing -> Nothing
       Just (a, s') -> Just (a, MkStream s' k)

   takeStream :: Int -> Stream a -> [a]
   takeStream 0 _ = []
   takeStream n s = case viewStream s of
       Nothing -> []
       Just (a, s') -> a : takeStream (n-1) s'

   fibStream :: Stream Integer
   fibStream = MkStream (0,1) (\(x,y) -> Just (x, (y, x+y)))

Here the 'constraint' made visible by pattern matching on MkStream (in
viewStream) is that the s type stored in the stream matches the s type
taken and returned by the 'get next element' function.  This allows the
construction of another stream with the same function but a new state --
MkStream s' k.

It also allows the stream function in fibStream to be non-recursive which
greatly aids the GHC optimizer in certain situations.
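
For instance, with the definitions above:

   ghci> takeStream 5 fibStream
   [0,1,1,2,3]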

 -- ryan



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Data structure containing elements which are instances of the same type class

2012-08-13 Thread Jay Sulzberger



On Mon, 13 Aug 2012, Alexander Solla alex.so...@gmail.com wrote:


On Mon, Aug 13, 2012 at 6:25 PM, Jay Sulzberger j...@panix.com wrote:




On Mon, 13 Aug 2012, Ryan Ingram ryani.s...@gmail.com wrote:

 On Mon, Aug 13, 2012 at 12:30 PM, Jay Sulzberger j...@panix.com wrote:


 Does Haskell have a word for existential type declaration?  I

have the impression, and this must be wrong, that forall does
double duty, that is, it declares a for all in some sense like
the usual for all of the Lower Predicate Calculus, perhaps the
for all of system F, or something near it.



It's using the forall/exists duality:
   exists a. P(a)  =  forall r. (forall a. P(a) -> r) -> r



;)

This is, I think, a good joke.  It has taken me a while, but I
now understand that one of the most attractive things about
Haskell is its sense of humor, which is unfailing.

I will try to think about this, after trying your examples.

I did suspect that, in some sense, constraints in combination
with forall could give the quantifier exists.



In a classical logic, the duality is expressed by !E! = A, and !A! = E,
where E and A are backwards/upsidedown and ! represents negation.  In
particular, for a proposition P,

Ex Px = !Ax! Px (not all x's are not P)
and
Ax Px = !Ex! Px (there does not exist an x which is not P)


Yes.



Negation is relatively tricky to represent in a constructive logic -- hence
the use of functions/implications and bottoms/contradictions.  The type
ConcreteType -> b represents the negation of ConcreteType, because it shows
that ConcreteType implies the contradiction.


I am becoming sensitized to this distinction.  I now, I think,
feel the impulse toward constructivism, that is, the
assumption/delusion^Waspiration that all functions from the reals
to the reals are continuous.  One argument that helped me goes:

 The reals between 0 and 1 are functions from the integers to say {0, 1}.

 They are attained as limits of functions f: iota(n) -> {0, 1}, as
 n becomes larger and larger and ... , where iota(n) is a set with
 n elements, n a finite integer.

 So, our objects, the reals, are attained as limits.  And the
 process of proceeding toward the limit is natural, functorial
 in the sense of category theory.

 Thus so also our morphisms, that is functions from the reals to
 the reals, must be produced functorially as limits of maps
 between objects f: iota(n) -> {0, 1}.



Put these ideas together, and you will recover the duality as expressed
earlier.


Thanks!  I am reading some blog posts and I was impressed by the
buffalo hair here:

  
http://existentialtype.wordpress.com/2012/08/11/extensionality-intensionality-and-brouwers-dictum/

oo--JS.















___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-10 Thread Jay Sulzberger



On Thu, 9 Aug 2012, Patrick Browne patrick.bro...@dit.ie wrote:



On 09/08/12, Jay Sulzberger j...@panix.com wrote:


  Here we are close to the distinction between a class of objects
  which satisfy a condition vs objects with added structure, for
  which see:

  http://math.ucr.edu/home/baez/qg-spring2004/discussion.html
  http://ncatlab.org/nlab/show/stuff,+structure,+property

  oo--JS.

This seems to be addressing my question, but I am not sure that I can
relate the above ideas to Haskell.


I am, perhaps too gingerly, looking at Haskell.  I have not yet
attempted to read the standard.  But from a distance (and this
addresses in part the question Is it April Fool's Day? in
response to my mild suggestion that category theory might be
applied to the relation between code-making and deployment of a
production system), looking at discussions by people who use
Haskell, a large scale effort must be made just to write down
even the most primitive approximations to these surely existing
functors:

1. the various functors between various categories with objects
types and with objects values

2. the various functors between various categories with objects
programs in source code form, with objects compiled programs,
with objects running programs, and some categories that somehow
deal with behaviors

I am aware that I am ignorant of the literature which deals with
these things.


Below is my current (naive) understanding and some further question:

objects which satisfy a condition
Could these objects be models that have the same signature (instances in
Haskell)?
Haskell type classes seem to be signature only (no equations, ignoring
default methods), so in general they provide an empty theory with no logical
consequences.

objects with added structure
I am struggling with this concept both in general and in relation to the
hierarchy from my earlier posting.
Could this be model expansion, where a theory describing an existing model
is enriched with additional axioms?
The enriched theory is then satisfied by models with more structure
(operations).
I am unsure about the size of this expanded model and the number of
potential expanded models.
Would an expanded model have fewer elements?
Would there be fewer models for the enriched theory?

In relation to Haskell, data types also have structure (constructors).
The data types can be used to build other data types (is this model
expansion?)
I am not sure if the model (instance) of a sub-class could be considered as
expanded model of its super-class.


I am not today enough up on the old Baez et al discussion.  I
hope to post something along this line when I know a bit more
Haskell, and after I have read again Baez et al.

For the record I now repeat what you already know.

I think good paradigm cases must be:

1. There is the category of groups.  A sub-category is the
   category of Abelian groups.

2. There is the category of rings.  The forgetful functor
   f: RO - R, where RO is the category of ordered rings and R is category
   of rings, and f just forgets the cone of non-negative elements.

1. must be a case of objects which satisfy a condition; 2. must
be a case of objects with added structure.




Your reply was very helpful
Thanks,
Pat


De nada and you are very welcome!

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-08 Thread Jay Sulzberger



On Wed, 8 Aug 2012, Ertugrul Söylemez e...@ertes.de wrote:


Patrick Browne patrick.bro...@dit.ie wrote:


Gast [1] describes a 3 level hierarchy of Haskell objects using
elementOf from set theory:

value *elementOf* type *elementOf* class


This hierarchy is pretty arbitrary and quickly runs into problems with
some type system extensions.  You can find out whether the barber of
Seville shaves himself.

A better hierarchial model is related to universes and uses type
relations (assuming a right-associative ':'):

   value : type : kind : sort : ...
   value : type : universe 0 : universe 1 : universe 2 : ...

A value is of a type.  A type is of the first universe (kind).  An n-th
universe is of the (n+1)-th universe.

Type classes can be modelled as implicit arguments.



If we include super-classes would the following be an appropriate
mathematical representation?


What is a superclass?  What are the semantics?


Greets,
Ertugrul


I know no Haskell, so my first reactions are likely to fail to grip.

There is a type theory from one generation, or perhaps two or
three, before our time's New Crazed Type Theory.  This is the
type theory of the Lower Predicate Calculus and of Universal
Algebra, style of Birkhoff's Theorem on Varieties.  An
introduction to this type theory is presented here:

  http://en.wikipedia.org/wiki/Signature_%28logic%29
  [page was last modified on 27 March 2011 at 16:54]

Haskell's type classes look to me to be a provision for declaring
a signature in the sense of the above article.  An instance of
type t which belongs to a type class tc is guaranteed to have
certain attendant structures, just as the underlying set of a
group is automatically equipped with attendant, indeed defining,
operations of 1, *, and ^-1.  1 is a zeroary operation with
codomain the set of group elements, * is a binary operation, that
is, it has type g x g -> g, and ^-1 has type g -> g, where g is the
type of group elements of our particular group.  That this
particular group is indeed an instance of the general concept
group requires that t be of a type class which guarantees the
attendant three group operations 1, *, and ^-1, with types as
shown.  Note that the usual definition of group has further
requirements.  These further requirements are called
associativity of *, 1 is an identity for *, and ^-1 is an
inverse for *.  I think that in Haskell today these requirements
must, in many cases, be handled by the programmer by hand, and
are not automatically handled by the type system of Haskell.
Though, as pointed out in an earlier post, in some cases one can
use certain constructions, constructions convenient in Haskell,
to guarantee that operations so defined meet the requirements.
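
As a sketch (invented names, not a standard library class), the signature
part is exactly what a class declaration captures; the equations are left
to the instance writer:

   class Group g where
     unit :: g             -- the zeroary operation 1
     mul  :: g -> g -> g   -- the binary operation *
     inv  :: g -> g        -- the unary operation ^-1

   -- The laws (associativity, identity, inverses) are not enforced by
   -- the type system; this trivial instance happens to satisfy them.
   instance Group () where
     unit    = ()
     mul _ _ = ()
     inv _   = ()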

Here we are close to the distinction between a class of objects
which satisfy a condition vs objects with added structure, for
which see:

  http://math.ucr.edu/home/baez/qg-spring2004/discussion.html
  http://ncatlab.org/nlab/show/stuff,+structure,+property

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Knight Capital debacle and software correctness

2012-08-04 Thread Jay Sulzberger



On Sat, 4 Aug 2012, Vasili I. Galchin vigalc...@gmail.com wrote:


Hello Haskell Group,

   I work in mainstream software industry.

   I am going to make an assumption  except for Jane Street
Capital all/most Wall Street software is written in an imperative
language.

   Assuming this why is Wall Street not awaken to the dangers. As I
write, Knight Capital may not survive the weekend.


Regards,

Vasili


I believe this particular mild error was in part due to a failure
to grasp and apply category theory.  There are several systems here:

1. The design of the code.

2. The coding of the code.

3. The testing of the code.

4. The live running of the code.

5. The watcher systems which watch the live running.

If the newspaper reports are to be believed, the watcher systems,
all of them, failed.  Or there was not even one watcher system
observing/correcting/halting at the time of running.

Category theory suggests that all of these systems are worthy of
study, and that these systems have inter-relations, which are
just as worthy of study.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Knight Capital debacle and software correctness

2012-08-04 Thread Jay Sulzberger



On Sat, 4 Aug 2012, Jake McArthur jake.mcart...@gmail.com wrote:


I feel like this thread is kind of surreal. Knight Capital's mistake
was to use imperative programming styles? An entire industry is
suffering because they haven't universally applied category theory to
software engineering and live systems? Am I just a victim of a small
troll/joke?

- Jake


ad application of category theory: No joke.

Atul Gawande's book The Checklist Manifesto deals with some of
this:

  http://us.macmillan.com/thechecklistmanifesto/AtulGawande

In related news, for every type t of Haskell is it the case that
something called _|_ is an object of the type?

oo--JS.




On Sat, Aug 4, 2012 at 12:46 PM, Jay Sulzberger j...@panix.com wrote:



On Sat, 4 Aug 2012, Vasili I. Galchin vigalc...@gmail.com wrote:


Hello Haskell Group,

   I work in mainstream software industry.

   I am going to make an assumption  except for Jane Street
Capital all/most Wall Street software is written in an imperative
language.

   Assuming this why is Wall Street not awaken to the dangers. As I
write, Knight Capital may not survive the weekend.


Regards,

Vasili



I believe this particular mild error was in part due to a failure
to grasp and apply category theory.  There are several systems here:

1. The design of the code.

2. The coding of the code.

3. The testing of the code.

4. The live running of the code.

5. The watcher systems which watch the live running.

If the newspaper reports are to be believed, the watcher systems,
all of them, failed.  Or there was not even one watcher system
observing/correcting/halting at the time of running.

Category theory suggests that all of these systems are worthy of
study, and that these systems have inter-relations, which are
just as worthy of study.

oo--JS.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Knight Capital debacle and software correctness

2012-08-04 Thread Jay Sulzberger



On Sat, 4 Aug 2012, Clark Gaebel cgae...@uwaterloo.ca wrote:


Yes.


Thank you!

Further, if you want:

  Let us have two types s and t.  Let _|_^s be the _|_ for type s,
  and let _|_^t be the _|_ for type t.

  For which famous equivalences of the Haskell System are these two
  _|_ objects equivalent?

oo--JS.




On Sat, Aug 4, 2012 at 1:47 PM, Jay Sulzberger j...@panix.com wrote:




On Sat, 4 Aug 2012, Jake McArthur jake.mcart...@gmail.com wrote:

 I feel like this thread is kind of surreal. Knight Capital's mistake

was to use imperative programming styles? An entire industry is
suffering because they haven't universally applied category theory to
software engineering and live systems? Am I just a victim of a small
troll/joke?

- Jake



ad application of category theory: No joke.

Atul Gawande's book The Checklist Manifesto deals with some of
this:

  
http://us.macmillan.com/thechecklistmanifesto/AtulGawande

In related news, for every type t of Haskell is it the case that
something called _|_ is an object of the type?

oo--JS.





On Sat, Aug 4, 2012 at 12:46 PM, Jay Sulzberger j...@panix.com wrote:




On Sat, 4 Aug 2012, Vasili I. Galchin vigalc...@gmail.com wrote:

 Hello Haskell Group,


   I work in mainstream software industry.

   I am going to make an assumption  except for Jane Street
Capital all/most Wall Street software is written in an imperative
language.

   Assuming this why is Wall Street not awaken to the dangers. As I
write, Knight Capital may not survive the weekend.


Regards,

Vasili




I believe this particular mild error was in part due to a failure
to grasp and apply category theory.  There are several systems here:

1. The design of the code.

2. The coding of the code.

3. The testing of the code.

4. The live running of the code.

5. The watcher systems which watch the live running.

If the newspaper reports are to be believed, the watcher systems,
all of them, failed.  Or there was not even one watcher system
observing/correcting/halting at the time of running.

Category theory suggests that all of these systems are worthy of
study, and that these systems have inter-relations, which are
just as worthy of study.

oo--JS.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Knight Capital debacle and software correctness

2012-08-04 Thread Jay Sulzberger



On Sat, 4 Aug 2012, Clark Gaebel cgae...@uwaterloo.ca wrote:


As far as I know, you can't check equivalence of _|_. Since Haskell uses
_|_ to represent a nonterminating computation, this would be synonymous
with solving the halting problem.
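
A sketch of why (definitions invented here): _|_ inhabits every lifted
type, and a decider for it would be a termination oracle.

   bot :: a
   bot = bot     -- a non-terminating definition; undefined also works

   -- If we had isBottom :: a -> Bool, then for any expression e,
   --   isBottom (e `seq` ())
   -- would report whether evaluating e terminates (the halting problem).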


Ah, thanks.  I will attempt to think about this.

oo--JS.




On Sat, Aug 4, 2012 at 2:04 PM, Jay Sulzberger j...@panix.com wrote:




On Sat, 4 Aug 2012, Clark Gaebel cgae...@uwaterloo.ca wrote:

 Yes.




Thank you!

Further, if you want:

  Let us have two types s and t.  Let _|_^s be the _|_ for type s,
  and let _|_^t be the _|_ for type t.

  For which famous equivalences of the Haskell System are these two
  _|_ objects equivalent?

oo--JS.




On Sat, Aug 4, 2012 at 1:47 PM, Jay Sulzberger j...@panix.com wrote:




On Sat, 4 Aug 2012, Jake McArthur jake.mcart...@gmail.com wrote:

 I feel like this thread is kind of surreal. Knight Capital's mistake


was to use imperative programming styles? An entire industry is
suffering because they haven't universally applied category theory to
software engineering and live systems? Am I just a victim of a small
troll/joke?

- Jake



ad application of category theory: No joke.

Atul Gawande's book The Checklist Manifesto deals with some of
this:

  
http://us.macmillan.com/thechecklistmanifesto/AtulGawande





In related news, for every type t of Haskell is it the case that
something called _|_ is an object of the type?

oo--JS.




 On Sat, Aug 4, 2012 at 12:46 PM, Jay Sulzberger j...@panix.com wrote:





On Sat, 4 Aug 2012, Vasili I. Galchin vigalc...@gmail.com wrote:

 Hello Haskell Group,



   I work in mainstream software industry.

   I am going to make an assumption  except for Jane Street
Capital all/most Wall Street software is written in an imperative
language.

   Assuming this why is Wall Street not awaken to the dangers. As I
write, Knight Capital may not survive the weekend.


Regards,

Vasili




I believe this particular mild error was in part due to a failure
to grasp and apply category theory.  There are several systems here:

1. The design of the code.

2. The coding of the code.

3. The testing of the code.

4. The live running of the code.

5. The watcher systems which watch the live running.

If the newspaper reports are to be believed, the watcher systems,
all of them, failed.  Or there was not even one watcher system
observing/correcting/halting at the time of running.

Category theory suggests that all of these systems are worthy of
study, and that these systems have inter-relations, which are
just as worthy of study.

oo--JS.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe








 ___

Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe









___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe






___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Reddy on Referential Transparency

2012-07-28 Thread Jay Sulzberger



On Sat, 28 Jul 2012, damodar kulkarni kdamodar2...@gmail.com wrote:


So a language is referentially transparent if replacing a sub-term with

another with the same denotation doesn't change the overall meaning?
But then isn't any language RT with a sufficiently cunning denotational
semantics?  Or even a dumb one that gives each term a distinct denotation.



That's neat ... I mean, by performing sufficiently complicated brain
gymnastics, one can do equational reasoning on C subroutines (functions!)
too.

So, there is no big difference between C and Haskell when it comes to
equational reasoning...


Regards,
Damodar


The general theory of equational reasoning is often applicable
because often we have a collection of sentences S of the form

  x = y

where x and y are terms, and often we have a set of models M,
where given a system of backround assumptions, we can ask of the
above sentence, call it s, whether in some particular model m, is
it the case that

  eval(x) = eval(y)

where now the sentence above is not a sentence of S, but rather a
fact, under assumptions, about the model m, namely that what the
term x means in m is equal to what the term y means in m.  Here
we have written what the term x refers to in m as eval(x).  For
example x might be (vector-ref part-of-backround 17) in some
Lisp source code, given as text, and y might
(vector-ref part-of-backround 116), and part-of-backround is
some vector which comes from, in general both the model m and,
this is a scare phrase, the backround assumptions.  Oi, please
forgive tangle here!

Note that the = in the sentence s is not the same = as in the
model m, and we should in a first text write them with different
symbols.  As terms x and y are not equal-as-terms.  It is only
their values in m that are equal-in-m.

(Obtuse Precisian Crank Further Remark: And the general theory of
equality often applies to whatever sort of object lies in the
vector part-of-backround at position 17, and also to whatever
lies at position 116.  THE ARGUMENT IS COMPLETE!  But see

 http://iml.univ-mrs.fr/~girard/TS2.pdf

for more and different.)


Personal remark: I remember the wonderful feeling of having a
weight lifted from me when I read this in Roger C. Lyndon's Notes on Logic:

  Oi, ah, I cannot find the quote tonight.

The quote was something like

  In this book we shall work at the level of precision which
  today is standard in works on the theory of groups, say.

by which Lyndon meant that certain extraordinarily finicky
considerations were, by the date of composition, standard, and
the reader might be trusted to handle these without error.

Of course, as a beginner coming to Haskell, and as an old Lisper

(my first Lisp was LISP 1.5, for which see
http://www.softwarepreservation.org/projects/LISP/book/LISP%201.5%20Programmers%20Manual.pdf)

I wish that Haskell made more patent to my un-practiced eye more
of such finicky details^Wbasics.  Naturally what I'd like is
Haskell source code presented in the usual parenthesized prefix
notation of Lisp, with of course, every term having an
immediately available complete explicit typing, which could
perhaps be presented when one clicks on the term (perhaps plists
might be part of the mechanism).

BRING OUT THE CORE!

oo--JS.





On Sat, Jul 28, 2012 at 1:47 AM, Alexander Solla alex.so...@gmail.com wrote:




On Fri, Jul 27, 2012 at 12:06 PM, Ross Paterson r...@soi.city.ac.uk wrote:


On Fri, Jul 27, 2012 at 07:19:40PM +0100, Chris Dornan wrote:

So a language is referentially transparent if replacing a sub-term

with another with the same

denotation doesn't change the overall meaning?


Isn't this just summarizing the distinguishing characteristic of a

denotational semantics?

Right, so where's the substance here?


My understanding is that RT is about how easy it is to carry out
_syntactical_ transformations of a program that preserve its meaning.
For example, if you can freely and naively inline a function definition
without having to worry too much about context then your PL is deemed
to possess lots of RT-goodness (according to FP propaganda anyway; note
you typically can't freely inline function definitions in a procedural
programming language because the actual arguments to the function may
involve dastardly side effects; even with a strict function-calling
semantics divergence will complicate matters).
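
A tiny concrete case (example invented): in Haskell a definition can be
inlined naively, because its right-hand side means the same thing in
every context.

   double :: Int -> Int
   double x = x + x

   twelve, twelve' :: Int
   twelve  = double 6    -- inline freely:
   twelve' = 6 + 6       -- the same value in every context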


Ah, but we only think that because of our blinkered world-view.

Another way of looking at it is that the denotational semanticists have
created a beautiful language to express the meanings of all those ugly
languages, and we're programming in it.



A third way to look at it is that mathematicians, philosophers, and
logicians invented the semantics denotational semanticists have borrowed,
specifically because of the properties derived from the philosophical
commitments they made.  Computer science has a habit of taking ideas from
other fields and merely renaming them.  Denotational semantics is known
as model 

Re: [Haskell-cafe] Interest in typed relational algebra library?

2012-07-18 Thread Jay Sulzberger



On Tue, 10 Jul 2012, o...@okmij.org wrote:




And yes to first order predicate calculus too!


Just two weeks ago Chung-chieh Shan and I were explaining at NASSLLI
the embedding in Haskell of the higher-order predicate logic with two
base types (so-called Ty2). The embedding supports type-safe
simplification of formulas (which was really needed for our
applications). The embedding is extensible: you can add models and
more constants.

http://okmij.org/ftp/gengo/NASSLLI10/course.html#semantics


Oleg, thank you.  Which article should I read first?

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Interest in typed relational algebra library?

2012-07-18 Thread Jay Sulzberger



On Wed, 18 Jul 2012, Jay Sulzberger wrote:




On Tue, 10 Jul 2012, o...@okmij.org wrote:




And yes to first order predicate calculus too!


Just two weeks ago Chung-chieh Shan and I were explaining at NASSLLI
the embedding in Haskell of the higher-order predicate logic with two
base types (so-called Ty2). The embedding supports type-safe
simplification of formulas (which was really needed for our
applications). The embedding is extensible: you can add models and
more constants.

http://okmij.org/ftp/gengo/NASSLLI10/course.html#semantics


Oleg, thank you.  Which article should I read first?

oo--JS.


Ah, I see that the files are Haskell source files, and not Postscript files.

The order is clear.

oo--JS.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Interest in typed relational algebra library?

2012-07-07 Thread Jay Sulzberger



On Sat, 7 Jul 2012, Paul Visschers m...@paulvisschers.net wrote:


Hello,

I've been out of the Haskell game for a bit, but now I'm back. A couple of
years ago I made a small library that implements relational algebra with
types so that malformed queries and other operations are caught at compile
time. It is heavily based off of the internals of HaskellDB (see
http://hackage.haskell.org/packages/archive/haskelldb/2.1.1/doc/html/Database-HaskellDB-PrimQuery.html),
but types so that it can actually be used directly instead of having to use
HaskellDB's query monad. Besides the joy of using relational algebra
directly in your code, this also means that you can make query-optimizing
code in a type-safe way, you can subquery results returned by the database
directly without accessing the database again and you have more options
when converting from relation algebra to SQL or another query language. The
library isn't quite ready for release, but I might want to work on it a bit
and then release it. Is anyone interested in such a library?

Paul Visschers


Yes!

And yes to first order predicate calculus too!

oo--JS.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe