[Haskell-cafe] Multi-param typeclass vs locally constrained typeclass methods

2013-09-18 Thread Jacques Carette
Could someone please explain what the difference (if any!), in semantics 
is between


class Foo f => Bar f g where
  method1 :: f a -> g a

and

class Bar' g where
  method2 :: Foo f => f a -> g a

?  Maybe the translation of the above to something lower level might 
help.  [Note: f a -> g a is just an example, and is not meaningful to 
the question].
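
For concreteness, here is a rough dictionary-passing reading of the two
declarations (a sketch only; FooD is a placeholder, since Foo's methods are
not specified above):

{-# LANGUAGE RankNTypes #-}
-- Placeholder dictionary for Foo at some f; its real methods are unknown here.
data FooD f = FooD

-- class Foo f => Bar f g  ~~>  a record indexed by *both* f and g,
-- carrying the Foo dictionary for the chosen f:
data BarD f g = BarD
  { fooSuper :: FooD f
  , method1D :: forall a. f a -> g a
  }

-- class Bar' g  ~~>  a record indexed by g alone; method2 receives a Foo
-- dictionary at each call, for whatever f the caller picks:
newtype Bar'D g = Bar'D
  { method2D :: forall f a. FooD f -> f a -> g a }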


The best answer would contain a design guideline for typeclasses, which 
would spell out conditions in which to use each variant.


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multi-param typeclass vs locally constrained typeclass methods

2013-09-18 Thread Jacques Carette

On 13-09-18 08:54 AM, Roman Cheplyaka wrote:

* Jacques Carette care...@mcmaster.ca [2013-09-18 08:21:51-0400]

Could someone please explain what the difference (if any!), in
semantics is between

class Foo f => Bar f g where
   method1 :: f a -> g a

and

class Bar' g where
   method2 :: Foo f => f a -> g a

Bar is more flexible than Bar'. If you have n types, you can write n^2
Bar instances (potentially having very different semantics) to convert
between them.

You can only write n Bar' instances, on the other hand. In these
instances you can dispatch based on 'g' but not on 'f'. 'f' is abstract,
and you only can access it through the Foo interface.

So they are quite different.
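
For instance, with a made-up Foo (the real Foo is unspecified above), the
difference shows up directly in which instances one can write:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
class Foo f where
  peek :: f a -> Maybe a          -- invented method, just for illustration

instance Foo Maybe where peek = id
instance Foo []    where peek = foldr (\x _ -> Just x) Nothing

class Foo f => Bar f g where
  method1 :: f a -> g a

class Bar' g where
  method2 :: Foo f => f a -> g a

-- Bar: one instance per (f, g) pair; the body can inspect f concretely.
instance Bar Maybe [] where
  method1 = maybe [] (:[])

instance Bar [] [] where
  method1 = id

-- Bar': one instance per g; f stays abstract, reachable only through Foo.
instance Bar' [] where
  method2 = maybe [] (:[]) . peek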


Right, that makes sense, thanks.

Turns out that, in my case, I frequently want the Bar' case, as I want 
things to be fully polymorphic in Foo.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Strange cabal failure

2013-06-18 Thread Jacques Carette
In trying to install the lens package, it eventually tries to install 
transformers-compat-0.1.1.1 which in turn depends on 
transformers-0.3.0.0 -- however that asks for

transformers-0.3.0.0-3006d6ea13a2c10770bffd4de7a96dc9
which 1) is weird, and 2) doesn't exist!

What gives?

Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Workshop on Generic Programming, Second CFP

2013-06-10 Thread Jacques Carette

==
CALL FOR PAPERS

   WGP 2013

9th ACM SIGPLAN Workshop on Generic Programming
  Boston, Massachusetts, USA
  Saturday, September 29th, 2013

http://www.wgp-sigplan.org/2013

  Co-located with the
International Conference on Functional Programming (ICFP 2013)
==


Goals of the workshop
-

Generic programming is about making programs more adaptable by making
them more general. Generic programs often embody non-traditional kinds
of polymorphism; ordinary programs are obtained from them by suitably
instantiating their parameters. In contrast with normal programs, the
parameters of a generic program are often quite rich in structure; for
example they may be other programs, types or type constructors, class
hierarchies, or even programming paradigms.

Generic programming techniques have always been of interest, both to
practitioners and to theoreticians, and, for at least 20 years,
generic programming techniques have been a specific focus of research
in the functional and object-oriented programming communities. Generic
programming has gradually spread to more and more mainstream
languages, and today is widely used in industry. This workshop brings
together leading researchers and practitioners in generic programming
from around the world, and features papers capturing the state of the
art in this important area.

We welcome contributions on all aspects, theoretical as well as
practical, of

* generic programming,
* programming with (C++) concepts,
* meta-programming,
* programming with type classes,
* programming with modules,
* programming with dependent types,
* type systems for generic programming,
* polytypic programming,
* adaptive object-oriented programming,
* component-based programming,
* strategic programming,
* aspect-oriented programming,
* family polymorphism,
* object-oriented generic programming,
* implementation of generic programming languages,
* static and dynamic analyses of generic programs,
* and so on.

Program Committee
-

Jeremiah Willcock (co-chair), Indiana University
Jacques Carette (co-chair), McMaster University
Florian Rabe, Jacobs University Bremen
Emilie Balland, INRIA Bordeaux
Jeremy Siek, University of Colorado, Boulder
Gabriel Dos Reis, Texas A&M University
Christophe Raffalli, Savoie University
Anya Helene Bagge, Universitetet i Bergen
Tiark Rompf, Ecole Polytechnique Federale de Lausanne
Andreas Abel, Ludwig-Maximilians-Universitat Munchen
Edward Kmett, S&P Capital IQ
William Cook, University of Texas, Austin

Proceedings and Copyright
-

We plan to have formal proceedings, published by the ACM.  Authors must
transfer copyright to ACM upon acceptance (for government work, to the
extent transferable), but retain various rights
(http://www.acm.org/publications/policies/copyright_policy). Authors are
encouraged to publish auxiliary material with their paper (source code,
test data, etc.); they retain copyright of auxiliary material.

Submission details
--

Deadline for submission:     Friday    2013-06-14
Notification of acceptance:  Wednesday 2013-07-11
Final submission due:        Tuesday   2013-07-25
Workshop:                    Sunday    2013-09-28

Papers should be submitted via EasyChair at

  https://www.easychair.org/conferences/?conf=wgp2013

Submitted papers should be in portable document format (PDF), formatted
using the ACM SIGPLAN style guidelines (two-column, 9pt). The length is
restricted to 12 pages.

Travel Support
--

Student attendees with accepted papers can apply for a SIGPLAN PAC grant
to help cover travel expenses. PAC also offers other support, such as
for child-care expenses during the meeting or for travel costs for
companions of SIGPLAN members with physical disabilities, as well as for
travel from locations outside of North America and Europe. For details
on the PAC program, see its web page (http://www.sigplan.org/PAC.htm).

History of the Workshop on Generic Programming
--

Earlier Workshops on Generic Programming have been held in

  * Copenhagen, Denmark 2012 (affiliated with ICFP12),
  * Tokyo, Japan 2011 (affiliated with ICFP11),
  * Baltimore, Maryland, US 2010 (affiliated with ICFP10),
  * Edinburgh, UK 2009 (affiliated with ICFP09),
  * Victoria, BC, Canada 2008 (affiliated with ICFP),
  * Portland 2006 (affiliated with ICFP),
  * Ponte de Lima 2000 (affiliated with MPC),
  * Marstrand 1998 (affiliated with MPC).

Furthermore, there were a few informal workshops

  * Utrecht 2005 (informal workshop),
  * Dagstuhl 2002 (IFIP WG2.1 Working Conference

Re: [Haskell-cafe] Excercise on tagless final interpreters

2013-03-21 Thread Jacques Carette

On 13-03-21 06:32 AM, matteo vezzola wrote:

I'm playing with tagless final interpreters reading [1], using a very simple 
language:


class Ints repr where
 int :: Integer -> repr Integer
 (.+.) :: repr Integer -> repr Integer -> repr Integer
 (.*.) :: repr Integer -> repr Integer -> repr Integer
 (.-.) :: repr Integer -> repr Integer
 (.=.) :: repr Integer -> repr Integer -> repr Bool
newtype P repr t = P { unP :: Bool -> repr t }
instance Ints repr => Ints (P repr) where
 int n = P $ \ s -> if s then int n else (.-.) (int n)
 (.-.) n = P $ unP n . not
 n .+. m = P $ \ s -> unP n s .+. unP m s
 n .*. m = P $ \ s -> unP n s .*. unP m s
 n .=. m = P $ \ s -> unP n s .=. unP m s

After pushing down negations I'd like to distribute (.*.) over (.+.). [1] 
leaves it as an exercise, so it can't be that hard, but I don't get it...

Anyone knows how I could do it?

[1]: http://okmij.org/ftp/tagless-final/course/lecture.pdf


thanks,


It is exactly the same idea: you use a context to track whether you have 
something (a multiplication) waiting to be distributed.  It is a tad 
more involved because you need to track more than a single bit of 
information.


Write it out: draw two ASTs, one where there is something to distribute, 
and another where there isn't, put yourself in the position of the 
addition, and think what information would I need now to perform the 
distribution.  Once you've figured that out, the rest is 
straightforward.  You do need to figure out the non-distribution case as 
well, otherwise you'll find yourself pushing a context too far and get 
wrong answers.
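
For concreteness, a minimal evaluator one could pair with the Ints class and
the P wrapper quoted above, just to run the negation-pushing context (the R
type and the pushNegEval driver are my names, not from the lecture notes):

newtype R t = R { unR :: t }

instance Ints R where
  int n       = R n
  R n .+. R m = R (n + m)
  R n .*. R m = R (n * m)
  (.-.) (R n) = R (negate n)
  R n .=. R m = R (n == m)

-- Interpret with negations pushed to the leaves, then evaluate:
pushNegEval :: P R t -> t
pushNegEval e = unR (unP e True)    -- True = "positive" context

-- e.g. pushNegEval ((.-.) (int 2 .+. int 3) .*. int 4)  gives  -20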


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Workshop on Generic Programming

2013-03-18 Thread Jacques Carette
Dear list subscribers,  apologies if you receive multiple copies of this 
CFP.


==
CALL FOR PAPERS

   WGP 2013

9th ACM SIGPLAN Workshop on Generic Programming
  Boston, Massachusetts, USA
  Saturday, September 29th, 2013

http://www.wgp-sigplan.org/2013

  Co-located with the
International Conference on Functional Programming (ICFP 2013)
==


Goals of the workshop
-

Generic programming is about making programs more adaptable by making
them more general. Generic programs often embody non-traditional kinds
of polymorphism; ordinary programs are obtained from them by suitably
instantiating their parameters. In contrast with normal programs, the
parameters of a generic program are often quite rich in structure; for
example they may be other programs, types or type constructors, class
hierarchies, or even programming paradigms.

Generic programming techniques have always been of interest, both to
practitioners and to theoreticians, and, for at least 20 years,
generic programming techniques have been a specific focus of research
in the functional and object-oriented programming communities. Generic
programming has gradually spread to more and more mainstream
languages, and today is widely used in industry. This workshop brings
together leading researchers and practitioners in generic programming
from around the world, and features papers capturing the state of the
art in this important area.

We welcome contributions on all aspects, theoretical as well as
practical, of

* generic programming,
* programming with (C++) concepts,
* meta-programming,
* programming with type classes,
* programming with modules,
* programming with dependent types,
* type systems for generic programming,
* polytypic programming,
* adaptive object-oriented programming,
* component-based programming,
* strategic programming,
* aspect-oriented programming,
* family polymorphism,
* object-oriented generic programming,
* implementation of generic programming languages,
* static and dynamic analyses of generic programs,
* and so on.

Program Committee
-

Jeremiah Willcock (co-chair), Indiana University
Jacques Carette (co-chair), McMaster University
Florian Rabe, Jacobs University Bremen
Emilie Balland, INRIA Bordeaux
Jeremy Siek, University of Colorado, Boulder
Gabriel Dos Reis, Texas A&M University
Christophe Raffalli, Savoie University
Anya Helene Bagge, Universitetet i Bergen
Tiark Rompf, Ecole Polytechnique Federale de Lausanne
Andreas Abel, Ludwig-Maximilians-Universitat Munchen
Edward Kmett, S&P Capital IQ
William Cook, University of Texas, Austin

Proceedings and Copyright
-

We plan to have formal proceedings, published by the ACM.  Authors must
transfer copyright to ACM upon acceptance (for government work, to the
extent transferable), but retain various rights
(http://www.acm.org/publications/policies/copyright_policy). Authors are
encouraged to publish auxiliary material with their paper (source code,
test data, etc.); they retain copyright of auxiliary material.

Submission details
--

Deadline for submission:     Friday    2013-06-14
Notification of acceptance:  Wednesday 2013-07-11
Final submission due:        Tuesday   2013-07-25
Workshop:                    Sunday    2013-09-28

Papers should be submitted via EasyChair at

https://www.easychair.org/conferences/?conf=wgp2013

Submitted papers should be in portable document format (PDF), formatted
using the ACM SIGPLAN style guidelines (two-column, 9pt). The length is
restricted to 12 pages.

Travel Support
--

Student attendees with accepted papers can apply for a SIGPLAN PAC grant
to help cover travel expenses. PAC also offers other support, such as
for child-care expenses during the meeting or for travel costs for
companions of SIGPLAN members with physical disabilities, as well as for
travel from locations outside of North America and Europe. For details
on the PAC program, see its web page (http://www.sigplan.org/PAC.htm).

History of the Workshop on Generic Programming
--

Earlier Workshops on Generic Programming have been held in

  * Copenhagen, Denmark 2012 (affiliated with ICFP12),
  * Tokyo, Japan 2011 (affiliated with ICFP11),
  * Baltimore, Maryland, US 2010 (affiliated with ICFP10),
  * Edinburgh, UK 2009 (affiliated with ICFP09),
  * Victoria, BC, Canada 2008 (affiliated with ICFP),
  * Portland 2006 (affiliated with ICFP),
  * Ponte de Lima 2000 (affiliated with MPC),
  * Marstrand 1998 (affiliated with MPC).

Furthermore, there were a few informal workshops

  * Utrecht 2005 (informal workshop

Re: [Haskell-cafe] monadic DSL for compile-time parser generator, not possible?

2013-03-12 Thread Jacques Carette

On 13-03-12 04:06 PM, Jeremy Shaw wrote:

It would be pretty damn cool if you could create a data type for
generically describing a monadic parser, and then use template haskell
to generate a concrete parser from that data type. [...]
I would like to suggest that while it would be cool, it is
impossible.


Impossibility proofs are notoriously difficult.  You showed that this 
approach:



data ParserSpec a where
 AnyChar :: ParserSpec Char
 Return  :: a -> ParserSpec a
 Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
 FMap    :: (a -> b) -> ParserSpec a -> ParserSpec b


does not work.  The flaw is indeed in FMap.  It should not take a 
function as first argument, but rather a *description* of a function 
(the same way ParserSpec gives you a description of a parser).  Then you 
can make it work, if your 'description' language is adequate.
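
One way to make "a description of a function" concrete (FunSpec and its
constructors are invented here purely for illustration; any adequate
first-order language of functions would do):

{-# LANGUAGE GADTs #-}
import Data.Char (toUpper)

data FunSpec a b where
  Id      :: FunSpec a a
  Compose :: FunSpec b c -> FunSpec a b -> FunSpec a c
  UpperF  :: FunSpec Char Char          -- an example primitive

-- First-order data, so it can be quoted by Template Haskell, and it can
-- still be interpreted back into a real function when needed:
applySpec :: FunSpec a b -> a -> b
applySpec Id            = id
applySpec (Compose g f) = applySpec g . applySpec f
applySpec UpperF        = toUpper

data ParserSpec a where
  AnyChar :: ParserSpec Char
  Return  :: a -> ParserSpec a
  Join    :: ParserSpec (ParserSpec a) -> ParserSpec a
  FMap    :: FunSpec a b -> ParserSpec a -> ParserSpec b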


For some strange reason, I am biased towards 'finally tagless' 
descriptions, but YMMV.


Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Jacques Carette

On 22/11/2012 11:52 AM, Brandon Allbery wrote:
On Thu, Nov 22, 2012 at 7:56 AM, Jacques Carette care...@mcmaster.ca wrote:


On 20/11/2012 6:08 PM, Richard O'Keefe wrote:

On 21/11/2012, at 4:49 AM, c...@lavabit.com wrote:

Well, I don't know. Would it save some time? Why bother
with a core language?

For a high level language (and for this purpose, even Fortran
66 counts as
high level) you really don't _want_ a direct translation
from source code
to object code.  You want to eliminate unused code and you
want to do all
sorts of analyses and improvements.  It is *much* easier to do
all that to
a small core language than to the full source language.


Actually, here I disagree.  It might be much 'easier' for the
programmers to do it for a small core language, but it may turn
out to be much, much less effective.  I 'discovered' this when
(co-)writing a partial evaluator for Maple: 



You're still using a core language, though; just with a slightly 
different focus than Haskell's.  I already mentioned gcc's internal 
language, which similarly is larger (semantically; syntactically it's 
sexprs).  What combination is more appropriate depends on the language 
and the compiler implementation.


Right, we agree: it is not 'core language' I disagreed with, it is 
'smaller core language'.  And we also agree that smaller/larger depends 
on the eventual application.  But 'smaller core language' is so 
ingrained as conventional wisdom that I felt compelled to offer 
evidence against this bit of folklore.


Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Jacques Carette

On 22/11/2012 7:37 PM, Richard O'Keefe wrote:

On 23/11/2012, at 1:56 AM, Jacques Carette wrote:

Actually, here I disagree. It might be much 'easier' for the 
programmers to do it for a small core language, but it may turn out 
to be much, much less effective. I 'discovered' this when 
(co-)writing a partial evaluator for Maple: we actually made our 
internal language *larger*, so that we could encode more invariants 
syntactically. This ended up making our jobs considerably easier, 
because we did not have to work so hard on doing fancy analyses to 
recover information that would otherwise have been completely 
obvious. Yes, there were a lot more cases, but each case was 
relatively easy; the alternative was a small number of extremely 
difficult cases. 

I don't think we do disagree.  We are talking about the same thing:
``not hav[ing] to work so hard on doing fancy analyses''.
The key point is that an (abstract) syntax *designed for the compiler*
and a syntax *designed for programmers* have to satisfy different
design goals and constraints; there's no reason they should be the same.


I must have mis-interpreted what you said then.  We definitely agree on 
this key point.


Jacques



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-23 Thread Jacques Carette

On 23/11/2012 9:59 AM, Mike Meyer wrote:
[...] I have to ask if your core language for Maple was larger than 
Maple? 


Yes. Maple 10 had 62 cases in its AST, we had 75 (p.13 of [1])

Jacques

[1] http://www.cas.mcmaster.ca/~carette/publications/scp_MaplePE.pdf

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-22 Thread Jacques Carette

On 20/11/2012 6:08 PM, Richard O'Keefe wrote:

On 21/11/2012, at 4:49 AM, c...@lavabit.com wrote:

Well, I don't know. Would it save some time? Why bother with a core 
language? 

For a high level language (and for this purpose, even Fortran 66 counts as
high level) you really don't _want_ a direct translation from source code
to object code.  You want to eliminate unused code and you want to do all
sorts of analyses and improvements.  It is *much* easier to do all that to
a small core language than to the full source language.


Actually, here I disagree.  It might be much 'easier' for the 
programmers to do it for a small core language, but it may turn out to 
be much, much less effective.  I 'discovered' this when (co-)writing a 
partial evaluator for Maple: we actually made our internal language 
*larger*, so that we could encode more invariants syntactically.  This 
ended up making our jobs considerably easier, because we did not have to 
work so hard on doing fancy analyses to recover information that would 
otherwise have been completely obvious.  Yes, there were a lot more 
cases, but each case was relatively easy; the alternative was a small 
number of extremely difficult cases.


Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Does this have a name?

2012-11-05 Thread Jacques Carette

The function
 app1 f x = f >>= ($ x)
or equivalently
 app2 f x = join (f <*> pure x)

with type Monad m => m (a -> m b) -> a -> m b ?
Hoogle did not help.

Jacques

PS: a nice point-free version would be appreciated as well.  I can 
easily change app1 and app2 myself to point-free with enough 
applications of flip, . and $, but none of those are 'nice'.
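
(One mechanical point-free rendering, for what it is worth; not claiming it
is 'nice':)

import Control.Monad (join)

app1, app2, app3 :: Monad m => m (a -> m b) -> a -> m b
app1 f x = f >>= ($ x)
app2 f x = join (f <*> pure x)
app3     = (. flip ($)) . (>>=)     -- point-free version of app1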


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] [ANN] GenCheck - a generalized property-based testing framework

2012-06-23 Thread Jacques Carette

On 12-06-23 04:26 AM, Roman Cheplyaka wrote:

Hi,

1. Minor correction for your tutorial: reverse . reverse = id is
called the involution property, not idempotency.


Indeed!  Fixed, thanks.


2. Writing haddock documentation would definitely increase the chances
for GenCheck wide adoption.


Absolutely - I am working on that now.  It was always the plan, but we 
were sitting on GenCheck for too long, so I decided to push out an early 
0.1 release and fix that up for 0.2


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] [ANN] GenCheck - a generalized property-based testing framework

2012-06-23 Thread Jacques Carette

On 12-06-23 04:38 AM, José Pedro Magalhães wrote:



On Tue, Jun 19, 2012 at 4:04 PM, Jacques Carette care...@mcmaster.ca wrote:


User beware: this is gencheck-0.1, there are still a few rough
edges.  We plan to add a Template Haskell feature to this which
should make deriving enumerators automatic for version 0.2.


Can you provide me a quick pointer into what methods need to be 
generated automatically?


Sure.  Given a data definition such as
data Zipper a = Zip ![a] ![a] deriving (Eq,Show)
(where the strictness annotations are not relevant), one would want to 
automatically generate


instance Enumerated Zipper where
   enumeration = eMemoize $  eProd Zip (eList A) (eList A)

and for
data BinTree a = BTNode a | BTBr (BinTree a) (BinTree a)

generate
instance Enumerated (BinTree Label) where
  enumeration = eMemoize $ eSum (eNode (BTNode A))
(eProd BTBr eBinTree eBinTree)

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] [ANN] GenCheck - a generalized property-based testing framework

2012-06-20 Thread Jacques Carette

On 20/06/2012 6:56 AM, Henning Thielemann wrote:
QuickCheck is Haskell-98 and thus is very portable. I see that 
GenCheck needs some more extensions - type families, multi-parameter 
type classes, what else?


FlexibleContexts and FlexibleInstances.

However, I am fairly sure that the multi-parameter type classes are not 
in fact needed.  Certainly a branch of the current implementation could 
easily be done that removes these without harming the functionality 
currently implemented, although it might harm some of the 
extensibility.  And I think most of the uses of 
FlexibleContexts/Instances are tied to these MPTCs.


The one use of type families is to implement a view or, if you prefer, 
the 'get' part of a lens.  With type families, this is extremely 
elegant.  If portability is really important, this too could probably be 
branched into an implementation that does this otherwise.


I am actively working on refactoring the LabelledPartition MPTC into 
simpler pieces, but you'll have to wait until gencheck-0.2 for that.


Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [ANN] GenCheck - a generalized property-based testing framework

2012-06-19 Thread Jacques Carette
Test.GenCheck is a Haskell library for /generalized proposition-based 
testing/. It simultaneously generalizes *QuickCheck* and *SmallCheck*.


Its main novel features are:

 * introduces a number of /testing strategies/ and /strategy combinators/
 * introduces a variety of test execution methods
 * guarantees uniform sampling (at each rank) for the random strategy
 * guarantees both uniqueness and coverage of all structures for the
   exhaustive strategy
 * introduces an /extreme/ strategy for testing unbalanced structures
 * also introduces a /uniform/ strategy which does uniform sampling
   along an enumeration
 * allows different strategies to be mixed; for example one can
   exhaustively test all binary trees up to a certain size, filled with
   random integers.
 * complete separation between properties, generators, testing
   strategies and test execution methods

The package is based on a lot of previous research in combinatorics 
(combinatorial enumeration of structures and the theory of Species), as 
well as a number of established concepts in testing (from a software 
engineering perspective). In other words, further to the features 
already implemented in this first release, the package contains an 
extensible, general framework for generators, test case generation and 
management. It can also be very easily generalized to cover many more 
combinatorial structures unavailable as Haskell types.


The package also provides interfaces for different levels of usage. In 
other words, there is a 'simple' interface for dealing with 
straightforward testing, a 'medium' interface for those who want to 
explore different testing strategies, and an 'advanced' interface for 
access to the full power of GenCheck.


See http://hackage.haskell.org/package/gencheck for further details.

In the source repository (https://github.com/JacquesCarette/GenCheck), 
the file tutorial/reverse/TestReverseList.lhs shows the simplest kinds 
of tests (standard and deep for structures, or base for unstructured 
types) and reporting (checking, testing and full report) for the 
classical list reverse function. The files in tutorial/list_zipper show 
what can be done with the medium level interface (this tutorial is 
currently incomplete). The brave user can read the source code of the 
package for the advanced usage -- but we'll write a tutorial for this 
too, later.


User beware: this is gencheck-0.1, there are still a few rough edges.  
We plan to add a Template Haskell feature to this which should make 
deriving enumerators automatic for version 0.2.


Jacques and Gordon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [Haskell] [ANN] GenCheck - a generalized property-based testing framework

2012-06-19 Thread Jacques Carette

[Only cc:ing cafe]

There are definite similarities, yes - I only became aware of 
testing-feat very recently.   You seem to have concentrated more on 
efficiency, while we have focused more on the high-level modular design 
and on strategies.  We should probably merge our efforts, if you are 
willing.


I mistakenly built the release to upload to hackage from an experimental 
branch (which is currently broken).  The sources on github work.  I'll 
update gencheck shortly to fix this, sorry.


Jacques

On 12-06-19 12:58 PM, Jonas Almström Duregård wrote:

HI Jacques,

This looks very similar to the recently released testing-feat library:
http://hackage.haskell.org/package/testing-feat-0.2

I get a build error on the latest platform:

Test\GenCheck\Base\LabelledPartition.lhs:126:3:
 The equation(s) for `new' have two arguments,
 but its type `[a] -> Map k a' has only one
 In the instance declaration for `LabelledPartition (Map k) a'

Regards,
Jonas




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: signed-multiset-0.1

2012-04-19 Thread Jacques Carette

On 19/04/2012 4:10 AM, Stefan Holdermans wrote:

Release early, release often. Here's a second version:

   http://hackage.haskell.org/package/signed-multiset-0.2



Very cool.

In the literature, we have found [1] that these are sometimes also 
called 'hybrid sets'.  They do have some rather interesting 
applications, at least when paired up with an appropriate notion of 
partition (shameless plug warning).


Jacques

PS: the Haddock did not work for 0.2, but did for 0.1, you might want to 
look into that.


[1] http://dl.acm.org/citation.cfm?id=1894501
but also 
http://www.cas.mcmaster.ca/~carette/publications/calculemus10-final.pdf
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why were unfailable patterns removed and fail added to Monad?

2012-01-20 Thread Jacques Carette

On 19/01/2012 10:19 PM, Edward Z. Yang wrote:

  In other words,
MonadZero has no place in dealing with pattern match failure!

I completely agree.  See "Bimonadic semantics for basic pattern matching 
calculi" [1] for an exploration of just that.  In the language of that 
paper, the issue is that there is a monad of effects for actions, and a 
monad of effects for pattern matching, and while these are very lightly 
related, they really are quite different.  By varying both monads, one 
can easily vary through a lot of different behaviour for 
pattern-matching as found in the literature.


I should add that if we had known about some of the deeper structures of 
pattern matching (as in Krishnaswami's "Focusing on Pattern Matching" [2], 
published 3 years *later*), we could have simplified our work.


Jacques

[1] 
http://www.cas.mcmaster.ca/~kahl/Publications/Conf/Kahl-Carette-Ji-2006a.html

[2] http://www.cs.cmu.edu/~neelk/pattern-popl09.pdf

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Quickcheck research

2011-11-23 Thread Jacques Carette

On 11-11-23 08:28 PM, Jason Dagit wrote:

On a similar line of reasoning, I've wondered if Perlin style noise
generation could be applied to get a sort of fuzzing effect.  This
would be more interesting for cases where writing instances of
arbitrary is hard to do but test cases do exist.  Apply some sort of
pseudo-random noise to your examples and see if your properties still
hold.  I could see this having applications in parsers.

As far as I can tell, no one has used Perlin noise on algebraic
structures.  It seems to have only been applied to real valued spaces.
  Imagine having a parse tree then applying noise to the structure of
the tree then unparsing the tree back to concrete syntax.  You're
making the structure noisy instead of just fuzzing the concrete syntax
directly (which should increase the frequency that you change the
shape/meaning instead of just changing the tokens in the parse tree).



Interesting idea!  With the strategy based unified 
Quickcheck/Smallcheck that we're finishing up, it would be quite easy to 
program that as a new generation strategy and try it.


We've already got Boltzmann sampling on the list of things to look at in 
the future.


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Quickcheck research

2011-11-22 Thread Jacques Carette

On 22/11/2011 12:42 PM, wren ng thornton wrote:


Something I think would be nice is to see full integration between 
SmallCheck and QuickCheck. In particular, I'd like to use SmallCheck 
to exhaustively search the small cases, and then use QuickCheck in a 
way that ensures that it only tests on things larger than the ones 
which have already been tested.


Actually, (Gordon Uszkay and I) have already done that.  We are hoping 
to make a release of this in November.


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Standard 'decorator' class?

2011-09-23 Thread Jacques Carette

Has a class like

class Decoration t where
  type Datum t :: *
  value :: t -> Datum t

been defined in any of the haskell packages on hackage?  [There are so 
many variants on naming, it is rather difficult to search for this]
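
For reference, an example instance of the intended shape (the Labelled type
here is made up):

{-# LANGUAGE TypeFamilies #-}
data Labelled a = Labelled String a

instance Decoration (Labelled a) where
  type Datum (Labelled a) = a
  value (Labelled _ x) = x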


Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Post-doc position

2011-08-23 Thread Jacques Carette
People interested in (typed) code generation might be interested (or 
know people who are) in the following post-doc position. While the 
'topic' is design for multiple visual scales, the tools used in practice 
will be DSLs and (typed) code generators. We use Haskell for 
implementing all of our DSLs (see [1] for details of part of the ongoing 
work).


Feel free to contact me for further details.
Jacques
[1] http://www.cas.mcmaster.ca/~carette/SAGA/

[Sorry for the bad formatting, this 'ad' was unfortunately born as a 
Word document. Not my choice.]

=

Council on Library and Information Resources (CLIR) Postdoctoral 
Fellowship in Design for Multiple Visual Scales at McMaster University


The Computing and Software Department and the University Library at 
McMaster University in Hamilton, Ontario, Canada invite applications for 
a two-year CLIR Postdoctoral Fellowship in software design, with an 
emphasis on games, for multiple visual scales, from the iPod Touch to 
60-inch high-resolution screen panels. The Fellowship is part of the 
Gaming Scalability Environment (G-ScalE) project involving Dr. Jacques 
Carette (Computing and Software, Faculty of Engineering), Dr. Andrew 
Mactavish (Multimedia, Faculty of Humanities), and Jeff Trzeciak 
(University Librarian). The fellow will start immediately and report to 
Dr. Carette but also work closely with the library on certain projects.


Duties
• Work with Dr. Carette and his students to identify all of the issues 
relating to usability at different scales, then help design (and 
implement) a set of domain-specific languages which capture the 
constraints of user-interface features with respect to scale.
o From the psychology and user-interface literature, identify cognitive 
issues relating to scale perception, and capture these in a formal 
model. Examples: Fitts’ Law, Hick’s Law and the Steering Law;
o Identify examples of such concerns, and pre-existing solutions, from 
existing software which works at multiple scales; put an emphasis on 
solutions concerning gaming software;
o Relate these models and solutions to design constraints for features 
in software;
o Embody these design constraint into a set of domain specific 
languages; and

o Provide interpreters for these languages.
• The above work will be done in conjunction with a number of other 
researchers and students, and is expected to lead to several 
peer-reviewed publications, as well as working software.
• The candidate will be expected to teach one course (in the CAS 
department).

• The candidate will be expected to work with the library to:
o Identify the current and future library needs of Engineering faculty 
and students;
o Assess current state of library resources, services and facilities in 
support of Engineering;

o Make recommendations to the library administration for improvement;
o Promote greater awareness of library resources, services and 
facilities within Engineering;
o Promote greater awareness within the library of the faculty’s teaching 
and research priorities; and

o Provide support as needed to students and faculty within Engineering.

Required Skills:
• Ph.D. in Computer Science or Software Engineering is preferred; a 
Ph.D. in psychology, human factors, or mathematics accompanied by 
demonstrable expertise in software development and mathematical 
modelling will also be considered.

• Interest in multi-disciplinary work.
• Capable of understanding the requirements of faculty and students in 
Engineering with regards to library resources, services and facilities.


How to Apply
If you have received your Ph.D. within the last five years and are 
interested in this fellowship, please email your curriculum vitae to 
abis...@clir.org.
The Council on Library and Information Resources (CLIR) is an 
independent, nonprofit organization that forges strategies to enhance 
research, teaching, and learning environments in collaboration with 
libraries, cultural institutions, and communities of higher learning. 
CLIR promotes forward-looking collaborative solutions that transcend 
disciplinary, institutional, professional, and geographic boundaries in 
support of the public good.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] llvm package on Windows

2011-04-03 Thread Jacques Carette
I am trying to install llvm-0.9.0.1 on Windows (through cygwin), on top 
of a clean install of Haskell Platform 2010.2.0.0.


I do
cabal install llvm --configure-option --with-llvm-prefix=C:\\llvm
(which is where I unpacked llvm-gcc4.2-2.8-x86-mingw43.tar.bz2).  This 
fails with


.mingw\bin\ld.exe: cannot find -lLLVMXCoreAsmPrinter
(which is indeed the first -l option on the gcc line).

[I also tried the llvm-ht package, same issue; I can't tell which of the 
2 packages is 'best' from the online doc]


Perhaps I need to 'install' llvm more thoroughly?  [Most of the 
instructions online seem to assume you want to use llvm-gcc, which is 
not my case]


Any clues?

Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Language.C++ ?

2010-11-27 Thread Jacques Carette
Is anyone working on a Language.C++ module, similar to the wonderful 
Language.C ?  Right now just the syntax and a pretty-printer would be 
really nice to have...


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Musings on type systems

2010-11-21 Thread Jacques Carette

On 21/11/2010 8:33 PM, wren ng thornton wrote:

On 11/20/10 6:33 AM, Ketil Malde wrote:

I guess this makes [X] an exponential type, although I don't remember
seeing that term :-)


Nope. (a->b) is the exponential type, namely |a->b| = |b|^|a|.

[_] is just a solution to the recursive equation [x] = 1 + x*[x].


So [X] corresponds to the 'type' 1/(1-X).  This is sometimes called 
the 'asterate' in other contexts, since it corresponds to the Kleene 
star.  The nice thing about the 'asterate' is that it exists for many 
many semi-rings -- and one can use it as a replacement for both 'minus' 
and 'exact division' in the Gaussian Elimination algorithm.
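
Spelling out that first step: from [x] = 1 + x*[x] one gets (1 - x)*[x] = 1, 
i.e. [x] = 1/(1-x) = 1 + x + x^2 + x^3 + ..., which is just the "a list has 
zero, one, two, ... elements" reading of the geometric series.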


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector-space and standard API for vectors

2010-11-04 Thread Jacques Carette

On 02/11/2010 8:16 PM, Conal Elliott wrote:

 Vector (Complex a) is a vector with respect to both 'a' and 'Complex a'.

Even worse, () is a vector w.r.t. *every* scalar type.


Why is this bad?  () is the canonical 0-dimensional vector space.  
0-dimensional vector spaces are very useful because they allow quite a 
number of linear algebra algorithms to be stated ``inductively'' with no 
funny special cases.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector-space and standard API for vectors

2010-10-23 Thread Jacques Carette

 On 23/10/2010 5:19 PM, Daniel Peebles wrote:
Just out of curiosity, why do you (and many others I've seen with 
similar proposals) talk about additive monoids? are they somehow 
fundamentally different from multiplicative monoids? 


People usually use additive notation for commutative monoids, and 
multiplicative notation for generic monoids.  It's a convention, nothing 
else.  Otherwise, of course they are isomorphic.


When I was playing with building an algebraic hierarchy, I picked a 
neutral operator for my monoids (I actually started at magma, but 
it's the same thing) and then introduced the addition and 
multiplication distinction at semirings, as it seemed pointless to 
distinguish them until you have a notion of a distributive law between 
the two.


How you do this really depends on what features you have for 
naming/renaming, how notions of inheritance/subtyping work, etc.


Having multiple names for structures which are, in fact, isomorphic, 
turns out to be really convenient when you want to combine the 
structures without duplicating declarations.  Let me stress that again: 
'really convenient'.  Not required, manditory, or anything else like that.


Of course, all we really require is any Turing complete language.  So if 
'require' is the main criterion, we should all still be programming in 
assembler.  There really is a reason we're having this discussion on the 
Haskell-cafe mailing list...


Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Question about memory usage

2010-08-17 Thread Jacques Carette

Daniel Fischer wrote:

On Aug 16, 2010, at 6:03 PM, Jacques Carette wrote:


Any sequence of numbers given by a linear recurrence equation with
constant coefficients can be computed quickly using asymptotically
efficient matrix operations.  In fact, the code to do this can be
derived automatically from the recurrence itself.
  

This is neat. Is it always M^n for some matrix M? How does it work?



Yes, it's always M^n.

If the relation is

a_n = c_1*a_(n-1) + ... + c_k*a_(n-k)

you have the k×k matrix

c_1 c_2 ... c_(k-1) c_k
1 0 ...0 0
0 1 0 ... 0 0
0 0 1 0 ... 0
0 ... 0 1 0 0
0 ... 0 0 1 0

to raise to the n-th power, 


However, for large k, this isn't particularly efficient since standard 
matrix multiplication is O(k^3).
I said asymptotically efficient matrix multiplication, which in 
practice is between O(k^2.7) and O(k^2.5) depending on the implementation.


 These matrices have a special structure 
that allows doing a multiplication in O(k^2).

You might want to look into the Cayley-Hamilton theorem for the latter.
  
Special multiplication by M is indeed O(k^2), but M^n is going to be 
dense (if the order of the recurrence is k, then M^n is fully dense for 
n=k).  And the interesting part is getting high iterates out, not 
having super large recurrences.


In any case, this really doesn't matter in practice since k tends to be 
*fixed*, so it is really a 'small constant'.  The real advantage comes 
through doing *binary powering*, not the matrix arithmetic.
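
As a concrete illustration of the binary-powering point, here is a small
self-contained Fibonacci-by-matrix-power sketch (mine, not code from this
thread or from any package):

-- 2x2 integer matrices, stored row-major; O(log n) multiplications.
data M2 = M2 !Integer !Integer !Integer !Integer

mmul :: M2 -> M2 -> M2
mmul (M2 a b c d) (M2 e f g h) =
  M2 (a*e + b*g) (a*f + b*h) (c*e + d*g) (c*f + d*h)

mpow :: M2 -> Int -> M2
mpow _ 0 = M2 1 0 0 1                         -- identity
mpow m n
  | even n    = let h = mpow m (n `div` 2) in mmul h h
  | otherwise = mmul m (mpow m (n - 1))

fib :: Int -> Integer
fib n = let M2 _ f _ _ = mpow (M2 1 1 1 0) n in f   -- fib 0 = 0, fib 1 = 1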


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Question about memory usage

2010-08-16 Thread Jacques Carette

Since we're off-topic...

Any sequence of numbers given by a linear recurrence equation with 
constant coefficients can be computed quickly using asymptotically 
efficient matrix operations.  In fact, the code to do this can be 
derived automatically from the recurrence itself.


Here is what you get for Fibonacci (using Maple):
 re := {fib(n+2) = fib(n+1) + fib(n)}; inits := {fib(0) = 1, fib(1) = 1};
{fib(n + 2) = fib(n + 1) + fib(n)}
 {fib(0) = 1, fib(1) = 1}
 gfun[rectoproc](re union inits, fib(n));
proc (n)
   local i1, loc0, loc1, loc2, tmp2, tmp1, i2;
   if n <= 44 then
   loc0 := 1; loc1 := 1;
   if n < 0 then error "index must be %1 or greater", 0
   elif n = 0 then return loc0
   elif n = 1 then return loc1
   end if;
   for i1 to n-1 do loc2 := loc0+loc1; loc0 := loc1; loc1 := loc2 
end do;

   loc1
   else
   tmp2 := Matrix(2, 2, {(1, 1) = 1, (1, 2) = 1, (2, 1) = 1, (2, 2) 
= 0});

   tmp1 := Vector(2, {(1) = 1, (2) = 1});
   i2 := convert(n-1, base, 2);
   if i2[1] = 1 then tmp1 := Vector(2, {(1) = 2, (2) = 1}) end if;
   for i1 in subsop(1 = (), i2) do
   tmp2 := LinearAlgebra:-MatrixMatrixMultiply(tmp2, tmp2);
   if i1 = 1 then tmp1 := 
LinearAlgebra:-MatrixVectorMultiply(tmp2, tmp1) end if

   end do;
   tmp1[1]
   end if
end proc

Direct computation is done for small n, and then asymptotically fast 
linear algebra is used for larger n.


This should be a nice Template Haskell exercise.

Jacques

Sebastian Fischer wrote:

[continuing off topic]

On Aug 16, 2010, at 3:13 PM, Daniel Fischer wrote:

You can calculate the n-th Fibonacci number in O(log n) steps using 
Integer

arithmetic to get correct results.


Yes, I was delighted when I saw this for the first time. It works by 
computing


/1 1\^n
\1 0/

(This is meant to be a matrix to the nth power) by repeated squaring. 
Wikipedia knows the details:


http://en.wikipedia.org/wiki/Fibonacci_number#Matrix_form

My Haskell implementation of this approach is on Hackage:

http://hackage.haskell.org/package/fibonacci

and github:


http://github.com/sebfisch/fibonacci/blob/master/Data/Numbers/Fibonacci.hs 



With this implementation,  printing  the result of a call to, say

fib 1

takes *much* longer than  computing  it.

[almost on topic again]

I am a bit concerned about the memory usage. If you know how to fix 
it, please send me patches (ideally via github)!


Cheers,
Sebastian




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Difficulties with tagless - create primitives or compose them

2010-06-14 Thread Jacques Carette

Günther Schmidt wrote:
I have recently found something new that might also prove to be useful 
for EDSLs.

http://blog.sigfpe.com/2009/05/three-projections-of-doctor-futamura.html

Dan's blog post doesn't give any code or implementation but in a way 
it tackles the same problem, and since you also mention partial 
evaluation and transformation you might also find this interesting.


Actually, we tried to do the 2nd Futamura projection in tagless-final 
style -- and could not.  This paper

http://www.daimi.au.dk/~ko/papers/pldi142_rendel1.pdf [1]
documents why we were not able to: such self-embeddings need infinite 
type towers, which neither Haskell nor O'Caml have.  We did have a 
work-around using both a object-language-level 'let' and a 
meta-language-level 'let', but it was unsatisfactory and, in the end, we 
cut that whole section out of the JFP paper.


It would be interesting to see if using the techniques of 
Atkey-Lindley-Yallop [2]

http://homepages.inf.ed.ac.uk/slindley/papers/unembedding.pdf
would make this easier.  I have not had a chance to try.

Jacques

[1]

@inproceedings{1542509,
author = {Rendel, Tillmann and Ostermann, Klaus and Hofer, Christian},
title = {Typed self-representation},
booktitle = {PLDI '09: Proceedings of the 2009 ACM SIGPLAN conference on 
Programming language design and implementation},
year = {2009},
isbn = {978-1-60558-392-1},
pages = {293--303},
location = {Dublin, Ireland},
doi = {http://doi.acm.org/10.1145/1542476.1542509},
publisher = {ACM},
address = {New York, NY, USA},
}

[2]

@inproceedings{1596644,
author = {Atkey, Robert and Lindley, Sam and Yallop, Jeremy},
title = {Unembedding domain-specific languages},
booktitle = {Haskell '09: Proceedings of the 2nd ACM SIGPLAN symposium on 
Haskell},
year = {2009},
isbn = {978-1-60558-508-6},
pages = {37--48},
location = {Edinburgh, Scotland},
doi = {http://doi.acm.org/10.1145/1596638.1596644},
publisher = {ACM},
address = {New York, NY, USA},
}



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Difficulties with tagless - create primitives or compose them

2010-06-13 Thread Jacques Carette
Excellent answer!  Splitting the Symantics class into pieces is one of 
the techniques that we didn't need for solving the original problem 
(tagless partial evaluation without resorting to fancy types) that set 
us on this track. Which is too bad, because it would have made a nice 
addition. The 'typecase pattern', on the other hand, was one of the 
explicit techniques we did use.


One of these days, Oleg, Ken and I need to write a follow-up on this, as 
the code contains a number of extra techniques (in the advanced partial 
evaluation parts) which were not explained in our paper.  And, of 
course, Oleg's added a number of refinements to the general technique 
[see his web site].


I must admit that I find it amusing that most people seem to use 
finally-tagless for writing interpreters, while we created it for 
partial evaluation and program transformation!  [It also allows you to 
do certain kinds of program analyses, but one needs additional 
techniques to make that comfortable - another thing to write up properly].


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Functions of type foo :: f a - g a

2010-05-13 Thread Jacques Carette




[I work with Gordon on this]

The problem domain is test frameworks, more specifically
proposition-based testing.  We are most of the way through unifying
QuickCheck and SmallCheck (as well as untangling the various 'stages'),
and are now generalizing.

The generator is obvious.  The 'reducer' would be a combination of
"running" a test and producing a summary of the tests run.  Right now,
we are exploring having the generator return a (potentially infinite)
'stream' of test cases, which frequently come with various gradings (as
in 'graded algebra') which induce what we referred to as 'chunks'.

Whether metamorphisms or hylomorphisms are best for our situation, that
remains to be explored.  But these are certainly excellent leads, thank
you.

Jacques

Edward Kmett wrote:
There is also a (naive) metamorphism combinator in my
category-extras library:
  
  http://hackage.haskell.org/packages/archive/category-extras/0.53.4/doc/html/Control-Morphism-Meta-Gibbons.html
  
Though it is definitely worth pursuing the optimizations that Gibbons
talks about in his very good spigot paper.
  
Another option might be to rephrase your problem into something
suitable to the Generator/Reducer framework in
  
  http://hackage.haskell.org/package/monoids-0.1.36
  
Since in many ways if f is a Generator and g is a Reducer, that is
exactly what my monoids api is all about. Though there I admit, I'd
need to know more about the problem domain.
  
-Edward Kmett
  
On Thu, May 13, 2010 at 3:32 PM, Stephen Tetley stephen.tet...@gmail.com wrote:
On 13 May 2010 20:25, Gordon J. Uszkay uszka...@mcmaster.ca wrote:
[SNIP]

  The f container is a potentially infinite stream of data obtained
from a generator, and I want to be able to control how much data is
extracted, so an eager 'fmap' won't be sufficient (an eager process
will be applied to the 'chunks' defined by g).

 I am going to purse the hylomorphism model, but am interested if
anyone else has any similar situations, and if using classes to manage
the interface is the right strategy.


Hi Gordon

From your description this sounds closer to Jeremy Gibbons
'metamorphism' than a hylomorphism:

http://www.comlab.ox.ac.uk/jeremy.gibbons/publications/
http://www.comlab.ox.ac.uk/oucl/work/jeremy.gibbons/publications/metamorphisms-scp.pdf

See also the Spigot / pi paper.

Best wishes

Stephen


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


  
  
  
  

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
  




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Would it be evil to add deriving Typeable to newtype Q?

2010-05-09 Thread Jacques Carette

Mike Dillon wrote:

begin Ivan Lazar Miljenovic quotation:
  

I take it you haven't had the legal problems that DrIFT had when it used
to be called Derive?  http://www.dcs.gla.ac.uk/~nww/Derive/History.html



Looks like they stopped selling it in June 2007, at least in the UK:


http://education.ti.com/educationportal/sites/UK/productDetail/uk_derive6.html

The other link I found redirected to a page that didn't mention Derive™
at all.
  


I can confirm that Derive, as a commercial product, no longer exists.  
[This comes from discussions with one of the original developers of 
Derive, as well as a developer at TI who worked on Derive until TI 
changed their computer algebra strategy]. 


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seeking the correct quote

2010-04-28 Thread Jacques Carette

Bradford Larsen wrote:

I don't have the book handy (it was from the library), but I seem to
remember reading something along those lines in ``Datatype-Generic
Programming:  International Spring School, SSDGP 2006, Nottingham, UK,
April 24-27, 2006, Revised Lectures'', edited by Backhouse, Gibbons,
Hinze, and Jeuring.
  

The spirit is there in quotes like

The term ‘generic programming’ means different things to different people,
because they have different ideas about how to achieve the common goal 
of combining

flexibility and safety. To some people, it means parametric polymorphism;
to others, it means libraries of algorithms and data structures; to 
another group,
it means reflection and meta-programming; to us, it means polytypism, 
that is,

type-safe parametrization by a datatype 

and

Moreover, a parametrization is usually only called ‘generic’ 
programming if it
is of a ‘non-traditional’ kind; by definition, traditional kinds of 
parametrization

give rise only to traditional programming, not generic programming.
Therefore, ‘genericity’ is in the eye of the beholder, with beholders 
from different

programming traditions having different interpretations of the term.

But nothing 'snappy'. Ah well.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Seeking the correct quote

2010-04-27 Thread Jacques Carette
I have heard generic programming described tongue-in-cheek as the kind 
of polymorphism that a language does not (yet) have.  I find this 
description rather apt, and it matches fairly what I see called 
'generic' in various communities.  But who said this, where and when?


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seeking the correct quote

2010-04-27 Thread Jacques Carette
Bradford Larsen wrote:
 On Tue, Apr 27, 2010 at 4:23 PM, Jacques Carette care...@mcmaster.ca wrote:
   
 I have heard generic programming described tongue-in-cheek as the kind of
 polymorphism that a language does not (yet) have.  I find this description
 rather apt, and it matches fairly what I see called 'generic' in various
 communities.  But who said this, where and when?
 
 I seem to
 remember reading something along those lines in ``Datatype-Generic
 Programming:  International Spring School, SSDGP 2006, Nottingham, UK,
 April 24-27, 2006, Revised Lectures'', edited by Backhouse, Gibbons,
 Hinze, and Jeuring.
I've got that book in my office - I'll check tomorrow.  I read the
preface before I posted on -cafe, and it wasn't there.  I'll read
further in, see if I can spot it.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seeking the correct quote

2010-04-27 Thread Jacques Carette
Sterling Clover wrote:
 I first encountered this quip on ltu:
 http://lambda-the-ultimate.org/node/1926#comment-23411

 However, that comment doesn't give a source either.
   
Probably where I remembered it from too.  I'll continue searching - it's
a good quote!

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Metaprogramming in Haskell vs. Ocaml

2010-04-06 Thread Jacques Carette
One thing I should have mentionned - TH and camlp4 are really 
equivalents.  And camlp4 is as-typed-as TH (or not, depending on your 
point of view).  I am co-author of a camlp4 extension, and I must admit 
that coding in camlp4 was not enjoyable, while coding in metaocaml 
(eventually) is.


[I see that Nicolas Pouillard just answered the question about new type 
definitions, etc, as I was typing this answer]


Heinrich Apfelmus wrote:

It seems to me that metaocaml is more used as user annotated partial
evaluation?
  
I see most of my uses of metaocaml as code generation (building 'up' 
code from pieces by making explicit design choices) rather than 
classical partial evaluation (specializing code).  Others would argue 
that these two activities are really the same thing -- and I would not 
really disagree!  It's just that in my cases the 'specialization' is not 
driven by the partial evaluator at all, it is fully driven by the user 
[through explicit decisions of which modules/functors to use].  This 
requires a very different code organization, which is why I make the 
difference between the two approaches.


The Haskell analogy: I write tons and tons of typeclass-level 'code', 
including typeclasses for type constructors (and type constructor 
transformers and ...) but then I ensure that whenever these are used, it 
can be statically determined exactly which instance should be used, and 
100% of the dispatch overhead is eliminated AND I get full control over 
which methods get inlined in the final code.  It is that 'full control' 
aspect which gets you reliable efficiency from highly abstract code.
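
In code, that usage pattern looks roughly like this (toy class, my names):

class Scalar a where
  zero :: a
  add  :: a -> a -> a
  mul  :: a -> a -> a

instance Scalar Int where
  zero = 0
  {-# INLINE add #-}
  add  = (+)
  {-# INLINE mul #-}
  mul  = (*)

-- At this monomorphic use site the Int instance is resolved at compile
-- time, so no dictionary is passed and the marked methods can inline away:
dot :: [Int] -> [Int] -> Int
dot xs ys = foldr add zero (zipWith mul xs ys)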


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Metaprogramming in Haskell vs. Ocaml

2010-04-05 Thread Jacques Carette

Don Stewart wrote:

I think we don't see as much metaprogramming because of other language
features -- laziness, operator syntax, and type classes -- make a bunch
of common designs work without needing metaprogramming.
  
While true, there are also 2 other reasons why meta-programmers are not 
all over Haskell:
1. efficiency nuts are already using C++ templates and don't see why 
they would switch,
2. people who care about types use a typed meta-language (like 
metaocaml) instead of an untyped template layer atop a (fantastic!) 
typed language.


Actually, people in the #2 camp (like me) are keeping a close eye on 
dependently-typed languages (like Idris [1]) where partial evaluation 
has been shown to be particularly easy and effective.  I am eagerly (!) 
awaiting similar results from the Agda [2] camp.


I believe the world really is ready for a typed metaprogramming 
language.  I know I am.  And I would really like it if Haskell were that 
language.


Jacques

[1] http://www.cs.st-andrews.ac.uk/~eb/Idris/
[2] http://wiki.portal.chalmers.se/agda/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Metaprogramming in Haskell vs. Ocaml

2010-04-05 Thread Jacques Carette
Jason Dagit wrote:

 While true, there are also 2 other reasons for meta-programmers
 are not all over Haskell:
 1. efficiency nuts are already using C++ templates and don't see
 why they would switch,
 2. people who care about types use a typed meta-language (like
 metaocaml) instead of an untyped template layer atop a
 (fantastic!) typed language. 


 Are you implying that template haskell is not typed?
Indeed.  A template haskell meta-program is not typed at 'compile time'
of the meta-program, but at 'run-time' of the splices.  In other words,
you can't typecheck a TH program without running it.  You do get _some_
typing (because the quotes have type Q Exp, Q [Dec] or Q Type), but
that is rather minimalistic.

Compare with metaocaml where if you can compile you meta-program (i.e.
code generator), then you are guaranteed that it can only ever produce
valid, well-typed code.  Not so with TH, where you can easily generate
junk -- which GHC will promptly figure out  and give you an error.  But
only 'late' in the process, which means that it can be fairly difficult
to understand what the cause of the error really is.  It's not nearly as
bad as C++ templates, but it's well short of what metaocaml can do.
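
To make that concrete, here is a tiny sketch of my own (not from any real
project): the generator below compiles without complaint, yet the code it
produces is nonsense, so the type error only shows up when the splice is
actually run.

{-# LANGUAGE TemplateHaskell #-}
module BadGen where

import Language.Haskell.TH

-- Builds the expression (True 1), i.e. the constructor True applied to an
-- integer literal.  'badExpr' itself typechecks -- its type is just Q Exp --
-- but any splice $(badExpr) produces a (late) type error at the use site.
badExpr :: Q Exp
badExpr = return (AppE (ConE 'True) (LitE (IntegerL 1)))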

Don't get me wrong - as a programming language, Haskell is still my
favourite.  It's just as a meta-programming language that it's not. 
[Unfortunately, I'm writing a lot of meta-programs these days !]

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Metaprogramming in Haskell vs. Ocaml

2010-04-05 Thread Jacques Carette
Casey McCann wrote:
 Not to speak for Jacques, 
:-)

and then you followed that up with a post with which I fully agree.

Jacques

 but my impression is that while TH itself is
 typed--it's just more Haskell after all--it doesn't do much to prevent
 you from generating code that is not well-typed. Or even well-formed,
 for that matter; my initial attempts to learn how to use TH produced
 quite a few that's impossible! errors from GHC (I do not think that
 word means what it thinks it means).

 There's also type-level metaprogramming, as in e.g. HList, which is
 almost completely untyped. I have some personal library code that
 implements a simple meta-type system and it's a huge, horrid, painful
 mess for something with an expressive power no better than System F.

 In contrast, MetaOCaml seems to be some variety of a multi-stage
 system where metaprogramming blends smoothly into regular
 programming with a single, consistent type ensuring type safety at all
 points.

 - C.
   

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] more thoughts on Finally tagless

2010-03-08 Thread Jacques Carette
Stephen Tetley wrote:
 The finally tagless style is an implementation of the TypeCase pattern
 (Bruno C. d. S. Oliveira and Jeremy Gibbons):
   
One part of our work does that, yes.

 The authors of the Finally Tagless note in the long version of their
 paper that the GADT TypeCase had some problems with non-exhaustive
 pattern matching (last paragraph, page 14) - if I'm understanding it
 correctly, in order to be total, the interpretation function still
 needs to introduce pattern matching clauses in some circumstances for
 values that the GADT actually prevents occurring.
   
You understand correctly.  By using plain HM, augmented with either type
classes or functors (to pass a higher-order dictionary around), all the
redundant cases can be eliminated in a way that is transparent to the
type system. 
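
A tiny sketch of the contrast (my own illustration, not the code from the
paper): in the initial, GADT-based encoding the interpreter is written by
pattern matching, and its totality is not always evident to the compiler;
in the final encoding the 'interpreter' is just an instance, so there is no
case analysis over syntax constructors at all.

{-# LANGUAGE GADTs #-}

-- initial encoding
data Exp a where
    IntE :: Int -> Exp Int
    AddE :: Exp Int -> Exp Int -> Exp Int

-- final encoding: the syntax is a class ...
class Sym repr where
    int :: Int -> repr Int
    add :: repr Int -> repr Int -> repr Int

-- ... and an interpreter is an instance; the only 'matching' left is
-- newtype unwrapping, never a case over syntax constructors.
newtype Eval a = Eval { eval :: a }

instance Sym Eval where
    int n                 = Eval n
    add (Eval x) (Eval y) = Eval (x + y)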

To me, the parametricity in the 'interpreter' buys you more than what
GADTs do.  This was most definitely unexpected.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] more thoughts on Finally tagless

2010-03-08 Thread Jacques Carette
Günther Schmidt wrote:
 In Olegs haskell implementation he is using classes mainly to model
 the syntax and instances to use for evaluators / compilers to allow
 multiple interpretations.
When there are 3 authors on a paper (and in the implementation file), it
is customary to acknowledge all 3, unless you have personal knowledge
that one particular person did the work.  In this case, the work was
very much joint between Oleg, Ken and I.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Finally tagless and abstract relational Algebra

2009-12-28 Thread Jacques Carette
Günther Schmidt wrote:
 My guess is that finally tagless style allows one to create a syntax
 without any initial dependency to an implementation. Ie. once one has
 created the syntax in this style one can then proceed to construct terms.
Yes.

 So this is my goal, create a syntax for relational algebra, express an
 abstract relational algorithm in this syntax and worry about
 implementation / compilation / evaluation *later*. But at least being
 able to express a correctly typed term.
Good plan.  But syntax design is hard, whatever style you choose.

 I presume I will need to employ HList at some point, but I'm not
 entirely certain where. Will I need it at the very beginning, as
 constrains in the syntax so that only correct abstract terms can be
 built, or will I need it in on of the interpreters / compilers later?
You will not need HList for constraining the syntax -- Haskell's type
system should already provide all you need to constrain the syntax.  In
fact, in tagless final style (rather than in initial style), for the
lambda calculus you don't even need GADTs to deal with exotic terms! 
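
As a very first cut, such a syntax might look like the following sketch
(the names are invented, just to show the shape -- no implementation is
being committed to yet):

class RA repr where
    union   :: repr row -> repr row -> repr row
    select  :: (row -> Bool) -> repr row -> repr row
    project :: (row -> out) -> repr row -> repr out
    cross   :: repr a -> repr b -> repr (a, b)

Terms written against this class are polymorphic in 'repr'; evaluators,
SQL generators, pretty-printers and so on are then just instances.  Note
that representing predicates and projections as plain Haskell functions,
as above, is precisely the kind of design decision that determines whether
you ever need HList-style records at all.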

Why do you think you'll have to use HList?  While HList is great, in
this particular instance, I don't think you'll need it.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Finally tagless and abstract relational Algebra

2009-12-28 Thread Jacques Carette
Günther Schmidt wrote:
 I do know that I could express my algorithms via list-comprehension or
 in a List Monad, all using tuples. And that would be concrete and
 grossly inefficient.
You should probably tell us what these algorithms accomplish, rather
than how one implementation goes.  From a higher-level view of what
you're trying to do [but not as high as saying 'implement abstract
relational algebra'], it will be easier to give concrete advice.

 So how would it be possible to express selecting /field/ b from
 /record/ x and field c from record y, creating record z, while making
 sure that record x does have field b and record y does have field c? I
 mean design a syntax for it?
Perhaps you should tell us why you think you need records at all, and
record sub-typing to boot.  You might well be right, but the
higher-level requirements will have a much bigger influence on the
design than anything else.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] lhs2tex, Haskell Platform and cygwin

2009-12-17 Thread Jacques Carette
I just downloaded a fresh copy of Haskell Platform, and successfully got 
cabal to update itself.  I installed smallcheck-0.4 successfully too.


But then I tried to install lhs2tex, which 'worked' until the point of 
saying:

Linking dist\build\lhs2TeX\lhs2TeX.exe ...
Installing executable(s) in C:\Program Files (x86)\Haskell\bin
setup.exe: The program mktexlsr is required but it could not be found

And I do have 'mktexlsr' in my cygwin PATH (where I was running 'cabal 
install lhs2tex').  It's true that mktexlsr isn't in my Windows PATH.  I 
would have thought someone else would have encountered this already, but 
my google searches found other (long fixed) problems with lhs2tex under 
cygwin, but not this one.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Restrictions on associated types for classes

2009-12-17 Thread Jacques Carette
Which is 'right' in a way, since in the language Conor defined, all 
definable terms are infinitely differentiable, and that can be 
inferred from the given rules.  That, in practice, you only need a 
finite number of them in any given computation is derivable from the 
instances, but is not a general theorem.


Jacques

PS: wait until someone wants to define resurgent functions, where you 
need ordinals to keep track of similar information!  Nice thing is that 
it's all still computable, you just have to work a fair bit harder to 
set up the correct machinery.  See papers by Salvy  Shackell, as well 
as van der Hoeven.


Simon Peyton-Jones wrote:

Hmm.  If you have
    class (Diff (D f)) => Diff f where

then if I have
f :: Diff f => ...
f = e
then the constraints available for discharging constraints arising from e are
Diff f
Diff (D f)
Diff (D (D f))
Diff (D (D (D f)))
...

That's a lot of constraints.  


Simon


| -Original Message-
| From: haskell-cafe-boun...@haskell.org 
[mailto:haskell-cafe-boun...@haskell.org] On
| Behalf Of Conor McBride
| Sent: 17 December 2009 14:48
| To: Haskell Cafe
| Subject: Re: [Haskell-cafe] Restrictions on associated types for classes
| 
| Hi all
| 
| On 17 Dec 2009, at 14:22, Tom Schrijvers wrote:
| 
|  class MyClass k where

|  type AssociatedType k :: *
| 
|  Is there a way of requiring AssociatedType be of class Eq, say?
| 
|  Have you tried:
| 
|  {-# LANGUAGE TypeFamilies #-}
|  {-# LANGUAGE FlexibleContexts #-}
| 
|  class Eq (AssociatedType k) => MyClass k where
|   type AssociatedType k :: *
| 
| I just got very excited about this. I'm supposed to be

| setting a test, but this is far more interesting. I tried
| this
| 
|   {-# LANGUAGE TypeFamilies, FlexibleContexts, EmptyDataDecls,

| TypeOperators #-}
| 
|   module DDD where
| 
|   class (Diff (D f)) => Diff f where

| type D f
| plug :: D f x -> x -> f x
| 
|   newtype K a x = K a deriving Show
| 
|   data Void

|   magic :: Void -> a
|   magic x = x `seq` error "haha"
| 
|   instance Diff (K a) where

| type D (K a) = K Void
| plug (K c) x = magic c
| 
|   newtype I x = I x deriving Show
| 
|   instance Diff I where

| type D I = K ()
| plug (K ()) x = I x
| 
|   data (f :+: g) x = L (f x) | R (g x) deriving Show
| 
|   instance (Diff f, Diff g) => Diff (f :+: g) where

| type D (f :+: g) = D f :+: D g
| plug (L f') x = L (plug f' x)
| plug (R g') x = R (plug g' x)
| 
|   data (f :*: g) x = f x : g x deriving Show
| 
|   instance (Diff f, Diff g) => Diff (f :*: g) where

| type D (f :*: g) = (D f :*: g) :+: (f :*: D g)
| plug (L (f' : g)) x = plug f' x : g
| plug (R (f : g')) x = f : plug g' x
| 
| But I got this message
| 
| [1 of 1] Compiling DDD  ( DDD.lhs, interpreted )
| 
| DDD.lhs:5:2:

|  Cycle in class declarations (via superclasses):
|DDD.lhs:(5,2)-(7,28): class (Diff (D f)) => Diff f where {
|  type family D f; }
| Failed, modules loaded: none.
| 
| and now I have to go back to setting my class test.
| 
| Sorry for spam
| 
| Conor
| 
| ___

| Haskell-Cafe mailing list
| Haskell-Cafe@haskell.org
| http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
  


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lhs2tex, Haskell Platform and cygwin

2009-12-17 Thread Jacques Carette

Stephen Tetley wrote:

Hi Jacques

Does it install properly by the runhaskell Setup.hs configure / build
/ install? I re-installed it this way a couple of weeks ago and it
never mentioned mktexlsr.
  
No, it doesn't.  I first ran into exceptions-related issues 
(as you did), which I solved via

runghc Setup configure --constraint="base<4"

But the install still failed on mktexlsr.  So I tried
runghc Setup configure --constraint="base<4" 
--with-mktexlsr=C:\\cygwin\\bin\\mktexlsr

runghc Setup build
runghc Setup install

but now I get a new error related to mktexlsr, namely
$ runghc Setup install
Installing executable(s) in C:\Program Files (x86)\Haskell\bin
Setup: \cygwin\bin\mktexlsr: runGenProcess: invalid argument (Exec 
format error)


mktexlsr is a /bin/sh script, so it's quite natural that an Exec on that 
would fail.  So I'm still stuck.


Thanks for the dos2unix pointer, that would have taken quite a while to 
figure out.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lhs2tex, Haskell Platform and cygwin

2009-12-17 Thread Jacques Carette

Some success now.

Reverting to building by hand, if I
1. edit config.mk to put quotes around the name of the GHC executable 
(since it contains spaces AND () ),

2. edit Makefile to add -package base-3.0.3.1 on to the --make line,
things proceed through the build and install stage properly.

For doing things by-hand, it would be nice if the 'configure' script 
automatically put quotes around the name of the GHC executable when 
generating config.mk.  I took a look to see if I could figure out a 
patch to do that, but it wasn't obvious enough.


Also, it would be nice if there were an lhs2tex version which worked 
out-of-the-box with base>=4, since hacking the Makefile for that seems, 
er, sub-optimal?


Best would be if cabal was cygwin-friendlier.  Unfortunately, that seems 
to be a lot easier said that done :-(.


Now I have to see if 'poly' style *actually* works...
Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Literate refactoring?

2009-11-19 Thread Jacques Carette

I'd like to ask the cafe's advice on how to 'do this right'.

The situation is that we've got 2 literate Haskell programs which we 
would like to refactor.  In fact, this refactoring will ultimately unify 
these 2 separate programs into a single framework.  Every refactoring 
step is well-defined and induces a discrete modification.  These 
programs also come with tests of their own, which we would like to 
refactor along with the code.


In the end, we would like to obtain:
1. For each discrete step, full working programs, with tests and 
literate comments
2. A technical report which describes all the steps from #1 above, as a 
literate Haskell document.


For the TechReport, we don't need to be able to extract working code 
from it, but the code which is in it should be chunks of the working 
programs, not 'independent'.  Especially as some of the 'discrete steps' 
will result in dead ends, which we want to document, but not otherwise 
pursue.  In other words, the set of discrete programs do not form a 
total linear order, but is more like a linear order with a few short 
branches hanging off of it.


We were thinking that using some combination of scripts, 'patches', 
features from subversion and literate Haskell documents should allow us 
to do this.  In an ideal world, it would be nice if what we stored was 
only (generalized) patches, and we could generate #1 and #2 above from 
that.  [These patches would be generalized because they would contain 
descriptions of what the patch is about, as well as actual patches; 
though perhaps that description is really just a patch to the 
TechReport, we're not sure]. 

We were able to envision a number of different partial solutions, but 
somehow none of them 'came together' well enough to feel satisfactory.  
Instead of biasing the cafe's thinking by laying those pieces out, I 
figured I would lay out the high-level requirements, and let creative 
solutions come in.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Finally tagless - stuck with implementation of lam

2009-10-15 Thread Jacques Carette

(sorry for the slow reply on this topic...)

Robert Atkey and Oleg presented some very interesting code in response 
to your query.  But some of you might (and should!) be asking why on 
earth did Jacques use unsafePerformIO?, especially when neither Robert 
nor Oleg did.


Simply put: I answered your question exactly as asked, while they 
answered the question you *should* have asked.  At the exact 'type' 
involved in your question, there is no good answer; if you want an 
instance of lam at a monadic type 'directly', you need to 'extract' from 
the monad.  But the point is that that isn't really the right question.  
Rather than requiring complete parametric polymorphism in the 
answer-type, if you allow a little non-uniformity through a simple type 
family, the shift in types is sufficient to no longer require a monadic 
'extract' at all.


But even once you've gotten to that point, that's not enough, because at 
that point you still have the issue of what calling convention (by 
value, name or need) to use.  And figuring that out is rather fun, so 
you got detailed answer from Robert and Oleg explaining that part in 
detail.  Robert carefully used IntT and :- to label the different 
types, and Oleg's code (http://okmij.org/ftp/tagless-final/CB.hs) showed 
how these were really just 'labels', which are useful mnemonics for 
humans, but not essential.


If you dig into our JFP paper describing finally tagless in all its 
gory details, you'll see that we use a poor man's version of type 
families there too (see section 4.3). 


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] CBN, CBV, Lazy in the same final tagless framework

2009-10-15 Thread Jacques Carette
Just a short note to show how the 3 evaluation orders can be written in 
a very symmetric manner:


o...@okmij.org wrote these as:

In call-by-name, we have
lam f   = S . return $ (unS . f . S)

In call-by-value, we have
lam f   = S . return $ (\x -> x >>= unS . f . S . return)

In call-by-need, we have
lam f   = S . return $ (\x -> share x >>= unS . f . S)
  

These can be rewritten as
call-by-name (eta-expanded and application made visible):
   lam f = S . return $ (\x -> unS . f . S $ x)
call-by-value (flip)
   lam f = S . return $ (\x -> unS . f . S . return =<< x)
call-by-need (flip)
   lam f = S . return $ (\x -> unS . f . S =<< share x)

This pushes us to write two helper functions:
execS  :: (IO a -> IO b) -> IO a -> IO b
execS g x = g =<< share x

execM :: Monad m => (m a -> m b) -> m a -> m b
execM g x = g . return =<< x

And now, with the magic of slices, we can truly display those 3 in 
highly symmetric fashion:

call-by-name:
   lam f = S . return $ ((unS . f . S) $)
call-by-value (flip)
   lam f = S . return $ ((unS . f . S) `execM`)
call-by-need (flip)
   lam f = S . return $ ((unS . f . S ) `execS`)
(the redundant $ is left in to make the symmetry explicit)

And now we see the pattern:
   lam f = wrap . lift $ ((unwrap . f . wrap) `apply`)
where the names above are meant to be suggestive rather than 'actual' names.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Num instances for 2-dimensional types

2009-10-05 Thread Jacques Carette

Lennart Augustsson wrote:

Everyone agrees that the Haskell numeric hierarchy is flawed, but I've
yet to see a good replacement.
  
That's because the good replacement which is mathematically sound 
would be a real mess to work with -- for exactly the same reason that 
Functor , Applicative and Monad are unrelated classes.  See [1].


In a prototype language I am playing with, my version of Num depends on 
15 'previous' classes, and that's likely to increase, not decrease.
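
To give the flavour of what such a hierarchy looks like (a fragment only,
with invented names -- the real thing also splits out commutativity,
distributivity, and so on):

class Additive a where
    (<+>) :: a -> a -> a            -- associativity is a law, not code

class Additive a => AdditiveMonoid a where
    zero :: a

class AdditiveMonoid a => AdditiveGroup a where
    neg :: a -> a

class AdditiveMonoid a => Multiplicative a where
    one   :: a
    (<.>) :: a -> a -> a

class (AdditiveGroup a, Multiplicative a) => Ring a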


Jacques

[1] http://repetae.net/recent/out/classalias.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Finally tagless - stuck with implementation of lam

2009-10-05 Thread Jacques Carette
It's possible, but it's not nice.  You need to be able to get out of 
the monad to make the types match, i.e.

   lam f = I (return $ \x -> let y = I (return x) in
 unsafePerformIO $ unI (f y))

The use of IO 'forces' lam to transform its effectful input into an even 
more effectful result.  Actually, same goes for any monad used inside a 
'repr'.


let_ and fix follow a similar pattern (although you can hide that 
somewhat by re-using lam if you wish).


Jacques

Günther Schmidt wrote:

Hi all,

I'm playing around with finally tagless.

Here is the class for my Syntax:


class HOAS repr where
    lam :: (repr a -> repr b) -> repr (a -> b)
    app :: repr (a -> b) -> repr a -> repr b
    fix :: (repr a -> repr a) -> repr a
    let_ :: repr a -> (repr a -> repr b) -> repr b

    int :: Int -> repr Int
    add :: repr Int -> repr Int -> repr Int
    sub :: repr Int -> repr Int -> repr Int
    mul :: repr Int -> repr Int -> repr Int


and here is one instance of that class for semantics.



newtype I a = I { unI :: IO a }

instance HOAS I where
    app e1 e2 = I (do
        e1' <- unI e1
        e2' <- unI e2
        return $ e1' e2')
    int i = I (putStrLn ("setting an integer: " ++ show i) >> return i)
    add e1 e2 = I (do
        e1' <- unI e1
        e2' <- unI e2
        putStrLn (printf "adding %d with %d" e1' e2')
        return $ e1' + e2')
    sub e1 e2 = I (do
        e1' <- unI e1
        e2' <- unI e2
        putStrLn (printf "subtracting %d from %d" e1' e2')
        return $ e1' - e2')
    mul e1 e2 = I (do
        e1' <- unI e1
        e2' <- unI e2
        putStrLn (printf "multiplying %d with %d" e1' e2')
        return $ e1' * e2')


I'm stuck with the lam method, for 2 days now I've been trying to 
get it right, but failed.


Is there a possibility that it isn't even possible?

Günther

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Finally tagless - stuck with implementation of lam

2009-10-05 Thread Jacques Carette

BTW, here is a more symmetric version of the same code:
   lam f = I . return $ unsafePerformIO . unI . f . I . return

which has a very clear pattern of
 lam f = embed $ extract . f. embed

where 'embed' is often called return.  Implementations of lam in 
tagless final style tend to follow the above pattern of embed/extract, 
each specialized to the particular repr implementation. 

In some cases, one only needs a context-sensitive extract, i.e. an 
extract which is only valid in the context of an outer 'embed'.  The 
most important example is code-generation in metaocaml where the 
'compiler' version of lam reads

 let lam f = .<fun x -> .~(f .<x>.)>.
where the .~ does an extract -- but *only* in the context of a 
surrounding code bracket (i.e. .< >.). 

If there is such a context-sensitive extract for the IO monad, i.e. 
something which allows you to pretend you've got a result in IO only 
valid *inside* IO, then you can use that to write a nicer lam. 

I tried all the obvious things with monadic operators to get this done, 
but did not succeed in the time I had.  Maybe Oleg or Ken can.  So don't 
give up hope quite yet!


Jacques

Günther Schmidt wrote:

Hello Jacques,

thanks, that is disappointing in some way, guess I still have a lot to 
learn.


Günther


Am 05.10.2009, 18:06 Uhr, schrieb Jacques Carette care...@mcmaster.ca:

It's possible, but it's not nice.  You need to be able to get out of 
the monad to make the types match, i.e.

lam f = I (return $ \x -> let y = I (return x) in
  unsafePerformIO $ unI (f y))

The use of IO 'forces' lam to transform its effectful input into an 
even more effectful result.  Actually, same goes for any monad used 
inside a 'repr'.


let_ and fix follow a similar pattern (although you can hide that 
somewhat by re-using lam if you wish).


Jacques


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Issue with IsFunction/Vspace in GHC 6.10.1

2009-04-02 Thread Jacques Carette
I was playing with some of Oleg's code (at end for convenience).  After 
minor adjustments for ghc 6.10.1, it still didn't work.  The error 
message is quite puzzling too, as it suggests adding exactly the 
constraint which is present...  Any ideas?


Jacques

-- Oleg's definition of a vector space class, based on IsFunction and
-- TypeCast.  See http://okmij.org/ftp/Haskell/isFunction.lhs
-- for the January 2003 message, which works in GHC 6.2.1 and 6.4
-- code below *works* in 6.8.1 AFAIK
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, 
FunctionalDependencies #-}

{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
{-# LANGUAGE OverlappingInstances #-}

module Q where

class Vspace a v | v -> a
   where
   (<+>) :: v -> v -> v
   (<*)  :: a -> v -> v

instance (IsFunction v f, Vspace' f a v) => Vspace a v
 where
 (<+>) = doplus (undefined::f)
 (<*)  = dostar (undefined::f)

class Vspace' f a v | f v -> a
   where
   doplus :: f -> v -> v -> v
   dostar :: f -> a -> v -> v

instance Num a => Vspace' HFalse a a where
    doplus _ = (+)
    dostar _ = (*)
-- etc.  No problem.

instance (IsFunction v f, Vspace' f a v, Vspace a v)
   => Vspace' HTrue a (c->v) where
    doplus _ f g = \x -> f x <+> g x
    dostar _ a f x = a <* (f x)


test1 = (1::Int) <+> 2
test2 = ((\x -> x + (10::Int)) <+> (\x -> x + (10::Int))) 1
test3 = ((\x y -> x + y) <+> (\x y -> (x + y) + x)) (1::Int) (2::Int)

test4 = ((\x y -> x + y) <+> (\x y -> ((2 <* x) <+> (3 <* y))))
   (1::Int) (2::Int)

data HTrue
data HFalse

class IsFunction a b | a -> b
instance TypeCast f HTrue => IsFunction (x->y) f
instance TypeCast f HFalse => IsFunction a f

-- literally lifted from the HList library
class TypeCast   a b   | a -> b, b->a   where typeCast   :: a -> b
class TypeCast'  t a b | t a -> b, t b -> a where typeCast'  :: t->a->b
class TypeCast'' t a b | t a -> b, t b -> a where typeCast'' :: t->a->b
instance TypeCast'  () a b => TypeCast a b where typeCast x = typeCast' () x
instance TypeCast'' t a b => TypeCast' t a b where typeCast' = typeCast''
instance TypeCast'' () a a where typeCast'' _ x  = x


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Issue with IsFunction/Vspace in GHC 6.10.1

2009-04-02 Thread Jacques Carette
Claus Reinke wrote:
 {-# LANGUAGE ScopedTypeVariables #-}

 without, the 'f's in the instance are independent.
 Claus
Thanks - I discovered this (by trial-and-error) at about the same time
you sent the email. 

Is there a ticket somewhere to add a warning about this?  I expected my
'f's to be the same, and the error messages were not the least bit
enlightening.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Pattern combinators

2008-12-20 Thread Jacques Carette

Andrew Wagner wrote:
Wadler posted a blog entry the other day about a paper on 
pattern-matching in Haskell (http://wadler.blogspot.com/). I've taken 
a first stab at turning it into actual code for hackage 
(http://hpaste.org/13215). There are two commented-out definitions 
that don't type-check, though, and the types are too wild for me to 
grok. Anybody have any suggestions for 1.) How to fix it and/or 2.) 
How to use data/type/newtype to simplify the types and make it more 
manageable? Thanks!
Both errors are because you are using any instead of any'; you might 
wish to put

import Prelude hiding (any)
at the top of your code, just to avoid such confusion.

To make the types more readable (but not necessarily more manageable), I 
have made some changes to my version of this code.  For example, instead 
of () as the empty stack, I use

data Void
void = undefined :: Void
in the definition of 'zero' (and thus also in p .. k).  I also use
data FUnit = FUnit  -- Unit just for fail
Lastly, instead of having a matcher be a pair (which obscures the use of 
pairs as a stack in other places, as well as matching on pairs), I defined

data Matcher a b = Matcher a b
and use that everywhere.

This all makes the types larger to type, but at least there is a cleaner 
separation of concerns, which makes the errors easier to figure out.


The principal weakness of these pattern-matching combinators is that 
there is no support for algebraic types, i.e. things like

data Tree a = Leaf | Branch (Tree a) (Tree a)
I can see how to use Typeable to deal with that, but is there a simpler way?

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Ambiguous type variable woes

2008-11-23 Thread Jacques Carette
I was trying to create a typeclass for an abstract Stack class, and ran 
into some problems.  The following 'works' fine:


{-# OPTIONS_GHC -XEmptyDataDecls -XFlexibleContexts 
-fno-monomorphism-restriction #-}

module Stack where

data Void

class Stack s where
   push_ :: s a r -> b -> s b (s a r)
   empty :: s () Void
   top   :: s a (s b r) -> (a, s b r)
   first :: s a r -> a

instance Stack (,) where
   push_ s a = (a,s)
   empty = ((),undefined::Void)
   top   = id
   first = fst

p = flip push_
test0 = top  . p 2 . p 3 $ empty

-- But the following doesn't - I get an Ambiguous type variable `s' in 
the constraint `Stack s' arising from the use of `first':

test1 = first . p 2 . p 3 $ empty
-- sure, that makes sense, it somehow needs to know what flavour of 
Stack to use even though (or perhaps because) the answer is independent 
of it.

-- So I listen to the probable fix and add a type signature:
test1 :: Stack (,) => Integer

-- This does not however help at all!  The only way I have found of 
'fixing' this requires annotating the code itself, which I most 
definitely do not want to do because I specifically want the code to be 
polymorphic in that way.  But GHC 6.8.2 does not want to let me do this.
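
-- For concreteness, this is the kind of in-code annotation (a sketch, with
-- a made-up name) that does make it go through, by pinning the stack
-- flavour down at the use site -- exactly what I want to avoid:
test1' :: Integer
test1' = first . p 2 . p 3 $ (empty :: ((), Void))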


What are my options?

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Strange space leak

2008-07-16 Thread Jacques Carette

apfelmus wrote:

Grzegorz Chrupala wrote:


   split DOC . words . map toLower = (:[]) . words . map toLower

Since you converted everything to lowercase, the string DOC will 
never appear in the text, resulting in a single huge document.


Oops, that should have been obvious, sorry for the dumb question.
Thanks,


No problem, it was not obvious to me and I had fun trying to figure it 
out :)


Speaking of not obvious: Haskell's type system catches a lot of bugs -- 
but still gives no help with this particular 'problem'.  But one can 
easily imagine an extension to a type system which could have detected 
that DOC can never occur in the result of words . map toLower, and 
then with a bit more work [type-level Nat], the type of the full 
expression could have encoded that the result is always going to be of 
length 1.  That would surely have been a good hint that something 
non-trivial was going on.


Whether a Haskell-friendly type system extension could be 
created/implemented which would cover this example, I don't know.  
However, I have had a lot of fun with the underlying idea: anytime 
someone encounters a bug in their code (and relates the debugging story 
on haskell-cafe), try to imagine how the type system could be extended 
to automate that.  In most cases, I don't mean to have the type system 
reject the code, but rather to have an inferred type that would make it 
obvious that the code did not behave as expected.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A question about algebra and dependent typing

2008-07-14 Thread Jacques Carette

Claus Reinke wrote:

You might find this interesting:

@inproceedings{ fokker95explaining,
author = Jeroen Fokker,
title = Explaining Algebraic Theory with Functional Programs,
booktitle = Functional Programming Languages in Education,
pages = 139-158,
year = 1995,
url = citeseer.ist.psu.edu/fokker95explaining.html }


Extremely interesting, thank you. It also contains a very interesting 
remark (on p. 2), noting that one should have

class Eq a => Monoid a where
    (+) :: a -> a -> a
    zero :: a

And states "Because in the laws... the notion of equality is used, we 
have made Monoid a subclass of Eq."  In hindsight, this is obvious, but it 
is naively tempting to define a monoid without this constraint.  And, 
indeed, Data.Monoid defines a Monoid class without an Eq constraint!


So, what are monoids without equality in Haskell? They are rather 
interesting beasts: they are monoids for which we have a computable 0 
(mempty) and a computable addition (mappend), but for which we cannot 
*witness* equality of the underlying carrier 'set'. This, for example, 
is the only way one can legitimately get an instance such as

instance Monoid b => Monoid (a -> b)

While Haskell is used quite a lot for doing computable (or 
'extensional') mathematics, intensionality sneaks in here and there -- 
and this is just one such example.
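
For the record, that instance is (roughly, as in Data.Monoid) just the
pointwise lifting -- and note that nothing in it could possibly decide
equality of functions:

instance Monoid b => Monoid (a -> b) where
    mempty      = \_ -> mempty
    mappend f g = \x -> f x `mappend` g x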


I must admit, I actually don't yet know whether I prefer classical 
monoids or these generalized monoids where the laws are _purely_ 
intensional. Opinions?


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] PLMMS - Last call for papers

2008-04-23 Thread Jacques Carette
 of the Journal of Automated Reasoning.
There will be a separate submission and review phase for this, where
papers from both PLMMS 2007 and 2008 will be considered.


Programme Committee
---

  Jacques Carette (Co-Chair) (McMaster University, Canada)
  John Harrison  (Intel Corporation, USA)
  Hugo Herbelin  (INRIA, Ecole polytechnique, France)
  James McKinna  (Radboud University Nijmegen, Netherlands)
  Ulf Norell (Chalmers University, Sweden)
  Bill Page
  Christophe Raffalli(Universite de Savoie, France)
  Josef Urban(Charles University, Czech Republic)
  Stephen Watt   (ORCCA, University of Western Ontario, Canada)
  Makarius Wenzel (Co-Chair) (Technische Universitaet Muenchen, Germany)
  Freek Wiedijk  (Radboud University Nijmegen, Netherlands)


Important Dates
---

 * Submission deadline - 5 May 2008
 * Notification of acceptance - 6 June 2008
 * Final version - 7 July 2008 (approximately)
 * Workshop - 28-29 July 2008

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Longest increasing subsequence

2008-04-10 Thread Jacques Carette
You can translate the following algorithm (written in Maple 11), which 
can be made purely functional.  [For the cognoscenti: attributes are 
implemented in-place in Maple, but that is really just an instance of 
the Decorator pattern which can be just as easily implemented with a 
functional Map].  Note that all the assignments below are really just 
let bindings.


Jacques

longestIncreasingSequence := proc(L::list)
local n,i,j,G;
uses GraphTheory;
   n := nops(L);
   G := Digraph(n
 , {seq(seq(`if`(L[i] < L[j]
, [i,j]
, NULL
   )
   , j = i+1..n)
   , i = 1..n-1)}
   );
   map(op,LongestPathOfDAG(G),L);
end proc:
#
LongestPathOfDAG := proc(G)
local len, sources, v;
uses GraphTheory;
   len := proc(v)
   option cache;
   local j,dep,longest;
   uses GraphTheory;
   dep := Departures(G,v);
   if dep = [] then
   setattribute(Float(1),v);
   else
   longest := max(seq(procname(j), j in dep));
   setattribute(1 + longest, v, attributes(longest));
   end if;
   end proc;
   sources := select(v -> Arrivals(G,v)=[], Vertices(G));
   [attributes(max(seq(len(v), v in sources)))];
end proc:
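
And since this is haskell-cafe, here is a rough Haskell sketch of the same
O(n^2) idea (my own, not a literal translation of the Maple above -- in
particular no attribute trickery, just laziness for the memo table):

import Data.List (maximumBy)
import Data.Ord  (comparing)

longestIncreasingSequence :: Ord a => [a] -> [a]
longestIncreasingSequence [] = []
longestIncreasingSequence xs = maximumBy (comparing length) memo
  where
    n    = length xs
    -- memo !! i is a longest increasing subsequence starting at position i
    memo = [ go i | i <- [0 .. n-1] ]
    go i = let x  = xs !! i
               cs = [ memo !! j | j <- [i+1 .. n-1], xs !! j > x ]
           in  x : if null cs then [] else maximumBy (comparing length) cs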
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] 2nd CFP: PLMMS 2008

2008-03-05 Thread Jacques Carette
 that have been completed into full papers,
to appear in a special issue of the Journal of Automated Reasoning.
There will be a separate submission and review phase for this, where
papers from both PLMMS 2007 and 2008 will be considered.


Programme Committee
---

  Jacques Carette (Co-Chair) (McMaster University, Canada)
  John Harrison  (Intel Corporation, USA)
  Hugo Herbelin  (INRIA, Ecole polytechnique, France)
  James McKinna  (Radboud University Nijmegen, Netherlands)
  Ulf Norell (Chalmers University, Sweden)
  Bill Page
  Christophe Raffalli(Universite de Savoie, France)
  Josef Urban(Charles University, Czech Republic)
  Stephen Watt   (ORCCA, University of Western Ontario, Canada)
  Makarius Wenzel (Co-Chair) (Technische Universitaet Muenchen, Germany)
  Freek Wiedijk  (Radboud University Nijmegen, Netherlands)


Important Dates
---

 * Submission deadline - 5 May 2008
 * Notification of acceptance - 6 June 2008
 * Final version - 7 July 2008 (approximately)
 * Workshop - 28-29 July 2008

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Graphical graph reduction

2008-02-24 Thread Jacques Carette

I think HOPS is what you are looking for
http://www.cas.mcmaster.ca/~kahl/HOPS/

It may advertize itself otherwise, but the main thing you 'see' when 
running programs in fully explicit mode is exactly all the graph reductions.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] CFP - PLMMS 2008

2008-01-31 Thread Jacques Carette


FIRST CALL FOR PAPERS

  Second Workshop on
   Programming Languages for Mechanized Mathematics
(PLMMS 2008)

http://events.cs.bham.ac.uk/cicm08/workshops/plmms/

  As part of CICM / Calculemus 2008
   Birmingham, UK, 28-29 July 2008


This workshop is focused on the intersection of programming languages
(PL) and mechanized mathematics systems (MMS). The latter category
subsumes present-day computer algebra systems (CAS), interactive proof
assistants (PA), and automated theorem provers (ATP), all heading
towards fully integrated mechanized mathematical assistants that are
expected to emerge eventually (cf. the objective of Calculemus).

The two subjects of PL and MMS meet in the following topics, which are
of particular interest to this workshop:

* Dedicated input languages for MMS: covers all aspects of languages
  intended for the user to deploy or extend the system, both
  algorithmic and declarative ones. Typical examples are tactic
  definition languages such as Ltac in Coq, mathematical proof
  languages as in Mizar or Isar, or specialized programming
  languages built into CA systems. Of particular interest are the
  semantics of those languages, especially when current ones are
  untyped.

* Mathematical modeling languages used for programming: covers the
  relation of logical descriptions vs. algorithmic content. For
  instance the logic of ACL2 extends a version of Lisp, that of Coq
  is close to Haskell, and some portions of HOL are similar to ML
  and Haskell, while Maple tries to do both simultaneously. Such
  mathematical languages offer rich specification capabilities,
  which are rarely available in regular programming languages. How
  can programming benefit from mathematical concepts, without
  limiting mathematics to the computational worldview?

* Programming languages with mathematical specifications: covers
  advanced mathematical concepts in programming languages that
  improve the expressive power of functional specifications, type
  systems, module systems etc. Programming languages with dependent
  types are of particular interest here, as is intentionality vs
  extensionality.

* Language elements for program verification: covers specific means
  built into a language to facilitate correctness proofs using MMS.
  For example, logical annotations within programs may be turned
  into verification conditions to be solved in a proof assistant
  eventually. How need MMS and PL to be improved to make this work
  conveniently and in a mathematically appealing way?

These issues have a very colorful history. Many PL innovations first
appeared in either CA or proof systems first, before migrating into
more mainstream programming languages. Some examples include type
inference, dependent types, generics, term-rewriting, first-class
types, first-class expressions, first-class modules, code extraction
etc. However, such innovations were never aggressively pursued by
builders of MMS, but often reconstructed by programming language
researchers. This workshop is an opportunity to present the latest
innovations in MMS design that may be relevant to future programming
languages, or conversely novel PL principles that improve upon
implementation and deployment of MMS.

We also want to critically examine what has worked, and what has not.
Why are all the languages of mainstream CA systems untyped? Why are the
(strongly typed) proof assistants so much harder to use than a typical
CAS? What forms of polymorphism exist in mathematics? What forms of
dependent types may be used in mathematical modeling? How can MMS
regain the upper hand on issues of genericity and modularity? What
are the biggest barriers to using a more mainstream language as a host
language for a CAS or PA/ATP?


Submission
--

Submission works through EasyChair
http://www.easychair.org/conferences/?conf=plmms2008

Two kinds of papers will be considered:

* Full research papers may be up to 12 pages long. Authors of
 accepted papers are expected to present their work on the workshop
 in a regular talk.

* Position papers may be up to 4 pages long. The workshop
 presentation of accepted position papers consists of two parts: a
 stimulating statement of certain issues or challenges by the
 author, followed by a discussion in the plenum.

Papers should use the usual ENTCS style http://www.entcs.org/prelim.html
(11 point version), and will be reviewed by the program
committee. Informal workshop proceedings will be circulated as a
technical report. We also plan post-workshop proceedings of improved
research papers, or position papers that have been completed into full
papers, as a special issue in a journal; papers from both PLMMS 2007
and 2008 will be considered here (details to follow).


Programme Committee
---

 Jacques Carette (Co-Chair) (McMaster University, Canada)
 John Harrison

Re: [Haskell-cafe] Somewhat random history question - chicken and egg

2007-11-13 Thread Jacques Carette

[EMAIL PROTECTED] wrote:

But... tell me please, ANYONE, who takes part in this inspiring exchange:
How many COBOL programs have you written in your life?
How many programs in Cobol have you actually SEEN?


Shudder.  In '86, I had to modify a COBOL code generator, *written in 
COBOL*.  The generated program read some data from a database and 
printed out zillions of reports; the generated program took more than 24 
hours to run, so rarely ever completed successfully, as the machines 
they ran it on was not all that reliable.  So I had to modify the 
generator to generate 'check-pointing' code, so the reporting program 
could be restarted from the last checkpoint rather than from the start.  
Report generation  was something that COBOL was rather good at; code 
generation was an entirely different matter!  That first exposure to 
(untyped) code-generation probably explains why I'm such a big fan of 
metaocaml these days...


That year, I had way more fun figuring out how call-by-name worked in 
Algol 60, by staring at the generated thunks in the Burroughs 6800 
assembly.  Nice thing about that machine was that it was a pure stack 
machine - kind of like the JVM, as a matter of fact.  The other nice 
thing about that machine is that they had completely bootstrapped it, so 
that there was only an Algol compiler for it, no user-level assembler at 
all [but a disassembler for debugging].  It had been bootstrapped 
several years back, or so I was told.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Math.OEIS 0.1

2007-10-25 Thread Jacques Carette

Carl Witty wrote:
On Mon, 2007-10-22 at 10:54 -0400, Brent Yorgey wrote:
The Online Encyclopedia of Integer Sequences should really

contain more
Haskell code for describing the sequences. 


Agreed!  I propose an OEIS party where we all sit around one day and
submit Haskell code. =)

(I'm only half joking...)
  


See
http://www.sagemath.org/hg/sage-main/file/cf44d55e626b/sage/combinat/sloane_functions.py
 for the very beginnings of an effort to write OEIS code in Python (for the 
SAGE computer algebra system).  (It looks like currently 130 sequences are 
implemented.)  Maybe this would be useful as a starting point.
  



This is a bit of a waste of human time, especially since automated 
methods (like gfun [1]) can automatically find closed-forms for about 
1/3 of the sequences on OEIS.  Generating code from the output of gfun 
is an easy task, and leads to rather pretty results [2], in my 
completely biased opinion.  Very very few of the sequences outside of 
that 1/3 have known programs associated with them.


Now, using many of Jerzy Karczmarczuk's ideas [3] and Haskell packages, 
one could rewrite gfun in Haskell, and the end result would likely be 
extremely elegant, although making it efficient might be rather more 
challenging.


Jacques

[1] http://portal.acm.org/citation.cfm?id=178368 and 
http://algo.inria.fr/libraries/

[2] http://www.cas.mcmaster.ca/~carette/publications/CaretteJanicki.pdf
[3] http://users.info.unicaen.fr/~karczma/arpap/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How odd...

2007-08-11 Thread Jacques Carette

[Sorry for the long quote, but context is important]

Dan Piponi wrote:

It's fairly standard practice, when documenting functions of a complex
variable, to specify precisely which 'branch cuts' are being used.
Here's a quote from the Mathematica documentation describing their Log
function: Log[z] has a branch cut discontinuity in the complex z
plane running from  -infinity to 0.

  

With this interpretation, (-1)^(1/3) = 0.5 + sqrt(3)/2 * i.  If you go with
the real solution (-1) you might need to do so carefully in order to
preserve other useful properties of ^, like continuity.



You can guarantee this by making sure you make the right 'cuts'.
  


If only it were that simple.  Up until rather recently, it was not even 
known if there was a set of right cuts for ln and all the arc-trig 
functions that would work together properly.  See
`According to Abramowitz and Stegun or arccoth needn't be uncouth', by 
Corless, Jeffrey, Watt and Davenport, available from

http://citeseer.ist.psu.edu/corless00according.html
or
http://www.apmaths.uwo.ca/~djeffrey/Offprints/couth.pdf

This is by no means a solved problem.  For example, there is still no 
consensus as to where the branch cuts for the various versions of the 
LegendreQ function should go.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: [Haskell] View patterns in GHC: Request forfeedback

2007-07-27 Thread Jacques Carette
Others have already pointed this out, but it is worth saying again: 
Maybe is not the only monadic effect which makes sense during 
pattern-matching.  Wolfram Kahl and I have explored some of these things 
as part of the Pattern Matching Calculus,

http://sqrl.mcmaster.ca/~kahl/PMC/
[If you want to jump to the most recent, most complete version, see
http://www.cas.mcmaster.ca/~kahl/Publications/TR/Kahl-Carette-Ji-2006b/]

Various other monads can be used for pattern-matching-effects.  While 
Maybe is quite classical, List and LogicT give extremely useful 
alternatives.
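
A tiny sketch of the flavour (this is nothing like the full calculus, just
an illustration with invented names): keep the result of a pattern in an
arbitrary MonadPlus, and the choice of monad decides the matching policy.

import Control.Monad (MonadPlus(..))

-- a pattern takes a scrutinee to results in some MonadPlus
type Pat m a b = a -> m b

pfail :: MonadPlus m => Pat m a b
pfail _ = mzero

orElse :: MonadPlus m => Pat m a b -> Pat m a b -> Pat m a b
orElse p q x = p x `mplus` q x

-- with m = Maybe this is committed choice (the classical semantics);
-- with m = [] (or LogicT) every way a pattern can match is produced.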


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Mathematica

2007-06-12 Thread Jacques Carette

Jon Harrop wrote:

On Tuesday 12 June 2007 18:41:34 Tomasz Zielonka wrote:
  

On Sun, May 13, 2007 at 02:19:55PM +0100, Andrew Coppin wrote:


Writing *insanely* efficient number chrunking software requires a deep
understanding of the target architecture, and lots of playing with very
low-level constructs. Assembly is really the only language you can do it
with; even C is probably too high-level.
  


Just reuse existing libraries: BLAS, LAPACK, ATLAS, FFTW. The free libraries 
that I have benchmarked were all much faster than Mathematica. FFTW was 4x 
faster on average, for example.
  


You must have done benchmarks with very low-precision numbers, which 
is really boring and for which Mathematica is overkill.  If you are 
interested in numerical computations that require arbitrary precision, 
then Mathematica (I have to admit) is seriously good:

http://www.cs.ru.nl/~milad/manydigits/results.php
Note how, for all the 'easy' problems, it beats every competitor 
hands-down.  And how they did not even give entries for any of the 
'hard' ones!


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Implementing Mathematica

2007-06-01 Thread Jacques Carette

Andrew Coppin wrote:

Lennart Augustsson wrote:

Why do you seem so in awe of Mathematica?


Oh, well, I guess it is only the most powerful maths software ever 
written... no biggie.
No, it is one of several.  In very little time I can find 20 things that 
Maple does better than Mathematica.  In the same amount of time, I can 
find 20 things that Mathematica does better than Maple. 

[Actually, the most obvious is that its marketing is miles better; so 
good that it makes blind evangelists out of people who have not even 
tried the competitors].


It's just another language with a good set of libraries.  Claims that 
it is the best, fastest, etc comes from Wolfram advertising, no 
doubt. :)


The claim that it is the fastest clearly doesn't hold (much to my 
surprise). The claim that it is the most powerful, well... I have yet 
to see anything that can come close to the symbolic power of Mathematica.

Give Maple a try.  For example, you'll find that:
1) Maple's DE solver beats Mathematica hands-down
2) Mathematica's definite integrator beats Maple's hands-down
3) Maple's symbolic non-linear equation solver is best
4) Mathematica's definite summation (ie finding closed forms) is best
and on and on.  [I don't know enough about the other systems to make 
similar comparison lists].


You got suckered by their marketing.  Get your head out of the sand, and 
take a good look around what is available.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Implementing Mathematica

2007-05-31 Thread Jacques Carette


Jon Harrop wrote:
Arbitrary precision integers are quite a performance burden and they 
are rarely used. I would not expect a language that is trying to be 
efficient to impose arbitrary precision integers (or floats).


Apparently you have never looked inside a computer *algebra* system, one that 
does *exact* computations (i.e. on polynomials with exact coefficients, 
not floats/doubles).  Arbitrary precision integers are not a 'burden', 
they are an absolute necessity.  Algorithms on polynomials will almost 
inevitably produce 'coefficient growth'.  Even things as simple as 
sub-resultant computations (for computing extended GCDs) have this 
problem.  And this is not a fluke or a problem with state-of-the-art, 
there are known cases where this is inevitable.


Like Jerzy, I wonder why I get suckered in to these conversations.  I 
guess we both have this silly need to set the record straight.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: [Haskell] boilerplate boilerplate

2007-05-22 Thread Jacques Carette

Personally I would like
{-# OPTIONS -fglasgow-exts -fgenerics #-}
module Blog.Types where

family Usual = (Eq, Ord, Read, Show, Typeable)

data BlogEntry = Entry EpochSeconds Name Email Title Body  deriving Usual
newtype Name = Name String   deriving Usual
newtype Title = Title String deriving Usual
newtype Body = Body String deriving Usual

Of course, if you're doing that, you might as well change those last 3 
lines with

newtype-schema NamedString = NamedString String deriving Usual
instantiate NamedString with Name, Title, Body

To me, that is very clear.  Syntax-wise, I am sure things can be 
improved (but I rather like 'family', 'foo-schema' and 'instantiate', 
but not really the 'with').


Jacques

Alex Jacobson wrote:
Consider this module for a blog entry that I will want to put in 
various generic collections that require Ord


  {-# OPTIONS -fglasgow-exts #-}
  module Blog.Types where
  import Data.Typeable
  import Data.Generics

  data BlogEntry = Entry EpochSeconds Name Email Title Body 
   deriving (Eq,Ord,Read,Show,Typeable)


  newtype Name = Name String deriving (Eq,Ord,Read,Show,Typeable)
  newtype Title = Title String deriving (Eq,Ord,Read,Show,Typeable)
  newtype Body = Body String deriving (Eq,Ord,Read,Show,Typeable)


It seems really unnecessarily verbose.  Having to add the OPTION 
header AND import Data.Typeable and Data.Generics just to derive 
Typeable is a beat-down.  It is even more of a beat-down to have to 
add a deriving clause for every newtype to make this all work nicely.  
Is there a way to make all types automatically derive everything 
unless there is an explicit instance declaration otherwise?


  {-# OPTIONS -fglasgow-exts -fgenerics -fderiving #-}
  module Blog.Types where

  data BlogEntry = Entry EpochSeconds Name Email Title Body 
  newtype Name = Name String
  newtype Title = Title String
  newtype Body = Body String

Isn't that much nicer?

-Alex-


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Programming Languages and Mechanised Mathematics -- deadline extension

2007-04-25 Thread Jacques Carette
 of a Journal (details to follow).


Important Dates

May 02, 2007: Submission Deadline (Extended!)
May 30, 2007: Notification
June 29-30, 2007: Workshop


Program Committee

Lennart Augustsson [Credit Suisse]
Wieb Bosma [Radboud University Nijmegen, Netherlands]
Jacques Carette (co-Chair) [McMaster University, Canada]
David Delahaye [CNAM, France]
Jean-Christophe Filliâtre [CNRS and Université de Paris-Sud, France]
John Harrison [Intel Corporation, USA]
Josef Urban [Charles University, Czech Republic]
Markus (Makarius) Wenzel [Technische Universität München, Germany]
Freek Wiedijk (co-Chair) [Radboud University Nijmegen, Netherlands]
Wolfgang Windsteiger [University of Linz, Austria]


  Location and Registration

Location and registration information can be found on the Calculemus
http://www.risc.uni-linz.ac.at/about/conferences/Calculemus2007/ web site.

-
[1] by popular we mean > 1 million users.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Template Haskell a suitable macro language?

2007-04-24 Thread Jacques Carette

Magnus Jonsson wrote:
I have the same problem too when using Haskell. The more I try to 
enforce static guarantees the more I get lots of datatypes that are 
similar except for one or two constructors. The best way I have found 
to avoid this is to simply give up on some of the static guarantees 
and just use one datatype that contains all the constructors. Less 
static guarantees but also less needless type coaxing between 90% 
similar types. I haven't tried using macros.
In Ocaml, you can frequently use polymorphic variants to get the same 
effect.


Which means that if you are willing to do enough type-class-hackery, it 
should, in principle, be possible to do the same in Haskell.  But it 
sure isn't as convenient!


This all points to a clear need for more ``flavours'' of polymorphism 
in Haskell, to be able to express *in the type system* 
what TH allows you to say outside it.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Template Haskell a suitable macro language?

2007-04-24 Thread Jacques Carette

Josef Svenningsson wrote:

On 4/24/07, Jacques Carette [EMAIL PROTECTED] wrote:

In Ocaml, you can frequently use polymorphic variants to get the same
effect.

Which means that if you are willing to do enough type-class-hackery, it
should, in principle, be possible to do the same in Haskell.  But it
sure isn't as convenient!


You seem to imply that there is an encoding of polymorphic variants in
Haskell using type classes. While I know that it's possible to achieve
similar effects using type classes I haven't seen a direct encoding.
If there is such an encoding I would be very interested to hear about
it.

As usual, look for a solution from Oleg:
http://www.haskell.org/pipermail/haskell/2006-July/018172.html

There was also a proposal by Koji Kagawa (published at Haskell '06)
http://portal.acm.org/citation.cfm?id=1159842.1159848
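
For readers wondering what such an encoding even looks like, here is a
minimal sketch of the flavour (my own toy example, not Oleg's actual
encoding): each "variant" is a method of its own class, so a value only
constrains its result type by the constructors it actually uses, much like
an open polymorphic-variant type in OCaml.

class HasCircle r where circle :: Double -> r
class HasSquare r where square :: Double -> r

-- usable at any type supporting both variants
shapes :: (HasCircle r, HasSquare r) => [r]
shapes = [circle 1.0, square 2.0]

-- one concrete interpretation of the open variants: compute areas
newtype Area = Area Double deriving Show

instance HasCircle Area where circle r = Area (pi * r * r)
instance HasSquare Area where square s = Area (s * s)

main :: IO ()
main = print (shapes :: [Area])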

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Unresolved overloading error

2007-03-31 Thread Jacques Carette

Bryan Burgers wrote:

On 3/31/07, Scott Brown [EMAIL PROTECTED] wrote:


It's working now, thank you.
I changed the definition to

 binom n j = div (fac n) ((fac j)*(fac (n - j)))

 bernoulli n p j = fromIntegral(binom n j)*(p ^ j) * ((1 - p)^(n - j))


As a matter of style suggestion, it might make 'binom' more clear if
you use 'div' as an infix operator:


binom n j = (fac n) `div` ( fac j * fac (n - j) )
And as a matter of efficiency, no one would write binom using factorial, 
but would rather write at least

binom u k = (product [(u-i+1) | i <- [1..k]]) `div` (product [1..k])
and even better would be to do it this way
-- bb u k = toInteger $ product [ (u-i+1) / i | i <- [1..k]]
but that does not type (for a good reason).  The issue is that it is 
possible to prove that the above is an integer, but the compiler can't 
see that :-(

That can be done as

import Data.Ratio
bb u k = numerator $ product [ (u-i+1) / i | i <- [1..k]]

Of course, the above is fast if and only if the gcd operation in 
Data.Ratio has been well optimized.
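
For reference, a self-contained transcription of both definitions that
loads as-is (the Rational argument type for bb is my choice, to make the
division well-typed; for integer arguments the result is an integer, which
is the point made above):

import Data.Ratio (numerator)

-- Integer version: product form, one division at the end.
binom :: Integer -> Integer -> Integer
binom u k = product [ u - i + 1 | i <- [1..k] ] `div` product [1..k]

-- Rational version: exact intermediate ratios, numerator taken at the end.
bb :: Rational -> Rational -> Integer
bb u k = numerator $ product [ (u - i + 1) / i | i <- [1..k] ]

main :: IO ()
main = print (binom 10 3, bb 10 3)   -- (120,120)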


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why the Prelude must die

2007-03-25 Thread Jacques Carette

Vivian McPhail wrote:

What I want to push is a 'mathematically sound' numeric prelude.  A proper
numerical prelude should have bona fide mathematical obects like groups,
rings, and fields underlying common numerical classes.  It would be edifying
to the student who discovered that the particular data type he is using is
an inhabitant of a known class and can thus take advantage of known
properties, presupplied as class methods.  Reasoning and communication about
programs, data types, and functions would be enhanced.
  
Yes, except that there are not as many groups, rings and fields out 
there as you think.  For one thing, Float and Double are not in any of 
those.  The best you can do with Float and Double is 
NonAssociativeAlgebra, which is unlikely to be 'edifying'.  At least 
Integer is a proper ring.  There is only one field that I am aware of in 
Haskell (Ratio Integer).  [Which is sad since Z_p is so easy to code 
up, and generally extremely useful].
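
To back up the "easy to code up" claim, a minimal sketch (my own, with the
modulus hard-wired for brevity; a real library would carry p in the type):

newtype Z7 = Z7 Integer deriving (Eq, Show)

p :: Integer
p = 7

instance Num Z7 where
  Z7 a + Z7 b   = Z7 ((a + b) `mod` p)
  Z7 a * Z7 b   = Z7 ((a * b) `mod` p)
  negate (Z7 a) = Z7 (negate a `mod` p)
  abs           = id
  signum (Z7 0) = Z7 0
  signum _      = Z7 1
  fromInteger n = Z7 (n `mod` p)

-- Multiplicative inverse via Fermat's little theorem: a^(p-2) mod p.
recipZ7 :: Z7 -> Z7
recipZ7 (Z7 0) = error "Z_7: no inverse of 0"
recipZ7 a      = a ^ (p - 2)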


Some classes would become even more important: monoid, groupoid, 
semi-group, loop (quasigroup with identity), etc.  But all of those are, 
to the average programmer (and many a mathematician), just as scary as 
Monad.


And personally, the place my semi-groups show up are not in my 
data-types, but in my *types*.



Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Mathematics in Haskell Re: [Haskell-cafe] Why the Prelude must die

2007-03-25 Thread Jacques Carette
My main worry is that this can be a horrible time sink without a clear 
path to success and, worse, known obstacles in the way.


For example: we know that all Monads are Functors, but that is not 
expressed in the type classes, because it would be too much a pain to do 
so.  That pain is heavily multiplied in a 'proper' subdivision of 
mathematical types.  Until that is fixed, I for one do not consider it 
worthwhile to spend a lot of time working on this because the end result 
might be correct, but completely unrealistic.


Another obvious item is that the lessons from DoCon have not all been 
reflected into advances in Haskell, not even Haskell'.  If I was in a 
pessimistic mood, I would mention the lessons from AUTOMATH (you know, 
stuff from 1968 to the mid 70s).


Claus Reinke wrote:
ok, tongue out of cheek again, someone would need to investigate the 
particular shape in which such abstract concepts would be most useful

(which may or may not be the same shape best known in maths, see
Monad, Monoid, Applicative, Arrow,..), and how those shapes are best 
expressed within the programming language abilities. and then, of 
course, someone would need to write code and tutorials.
Agreed.  This is why I mentioned semi-groups when someone talked about 
classical structures like rings and fields.  To pick just one example, 
Kleene Algebras seem more applicable to CS than say fields.  Similarly, 
there are more rngs and rigs than rings.  More applications for 
non-archimedean fields (power series) than archimedean ones (reals).  
And so on.


perhaps i was mistaken in thinking that there is a group of 
math-interested

haskellers out there discussing, developing, and documenting the area? or
perhaps that group needs introductory tutorials presenting its work?
My guess is that there are a number of people waiting in the wings, 
waiting for a critical mass of features to show up before really diving 
in.  See

http://www.cas.mcmaster.ca/plmms07/
for my reasons for being both interested and wary.

Probably the simplest test case is the difficulties that people are 
(still) encountering doing matrix/vector algebra in Haskell.  One either 
quickly encounters efficiency issues (although PArr might help), or 
typing issues (though many tricks are known, but not necessarily 
simple).  Blitz++ and the STL contributed heavily to C++ being taken 
seriously by people in the scientific computation community.  Haskell 
has even more _potential_, but it is definitely unrealised potential.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Two Other AI Sources

2007-03-20 Thread Jacques Carette

Alfonso Acosta wrote:

Just in case it helps, I ported the code of Norvig's Paradigms of
Artificial Intelligence Programming chapter 8 (integrals and
derivatives) for a college course.

A link to this code would have been / would still be most appreciated.

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] CFP: Prog. Lang for Mechanized Mathematics Workshop

2007-03-12 Thread Jacques Carette
 as a special issue of a Journal (details to follow).



   Important Dates

April 25, 2007: Submission Deadline
June 29-30, 2007: Workshop


   Program Committee

Lennart Augustsson (http://www.cs.chalmers.se/%7Eaugustss) [Credit Suisse]
Wieb Bosma (http://www.math.ru.nl/%7Ebosma/) [Radboud University Nijmegen, Netherlands]
Jacques Carette (co-Chair) (http://www.cas.mcmaster.ca/%7Ecarette) [McMaster University, Canada]
David Delahaye (http://cedric.cnam.fr/%7Edelahaye/) [CNAM, France]
Jean-Christophe Filliâtre (http://www.lri.fr/%7Efilliatr/) [CNRS and Université de Paris-Sud, France]
John Harrison (http://www.cl.cam.ac.uk/%7Ejrh13/) [Intel Corporation, USA]
Markus (Makarius) Wenzel (http://www4.in.tum.de/%7Ewenzelm/) [Technische Universität München, Germany]
Freek Wiedijk (co-Chair) (http://www.cs.ru.nl/%7Efreek/) [Radboud University Nijmegen, Netherlands]
Wolfgang Windsteiger (http://www.risc.uni-linz.ac.at/people/wwindste/) [University of Linz, Austria]



   Location and Registration

Location and registration information can be found on the Calculemus web site:
http://www.risc.uni-linz.ac.at/about/conferences/Calculemus2007/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Declaring variance of class parameters?

2007-02-27 Thread Jacques Carette

If I have a class, say
class Symantics repr where
  int :: Int -> repr Int

and so on, I would like to *require* that 'repr' be covariant.  Is there 
any way to do that?
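
One blunt way to state the requirement (an editorial aside, not an answer
from the thread) is to demand a Functor instance, which is exactly a
witness of covariance in the functorial sense:

class Functor repr => Symantics repr where
  int :: Int -> repr Int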


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] What inhabits this type: (forall a. a -> b) -> (forall a. m a -> m b)

2007-02-27 Thread Jacques Carette

Since my last query was answered so quickly, let's try another.

I have looked on Hoogle.  I would have asked Djinn, but I don't have it 
around.  So, can someone find a term that inhabits

(forall a. a -> b) -> (forall a. m a -> m b)
?  I think of this as the type of functions that, given a function from 
any boxed-up a to a b, will give me a function from a boxed-up m a to an 
m b -- m does not have to be a Monad!
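
An editorial note rather than an answer from the thread: if m is
additionally a Functor, the evident inhabitant is fmap itself; with no
constraint at all on m, parametricity appears to leave only undefined-like
terms, since an opaque m gives no way to turn an m a into an m b.

{-# LANGUAGE RankNTypes #-}

inhabit :: Functor m => (forall a. a -> b) -> (forall a. m a -> m b)
inhabit f = fmap f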


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Excel the most used -- and functional -- programming language on Earth?

2007-01-30 Thread Jacques Carette
There is a Maple plug-in for Excel.  If you have Maple (on Windows), 
just start Excel and you'll see extra buttons.


This allows you to have cells containing symbols, as well as access to 
all of Maple's functions.  This easily gets you a (very impure!) 
higher-order functional language inter-operating very closely with Excel.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Channel9 Interview: Software Composability and the Future of Languages

2007-01-27 Thread Jacques Carette

Tim Newsham wrote:

 I have to write:

   do {
   x <- getSomeNum
   y <- anotherWayToGetANum
  return (x + y)
   }

even if the computations of x and y are completely independent of
each other.


I too have really missed a parallel composition operator to do 
something like the above.  Something like


do {
    { x <- getSomeNum || y <- anotherWayToGetANum }
   return (x+y)
}

Actually, that syntax is rather hideous.  What I would _really_ like to 
write is

do {
    (x,y) <- getSomeNum || anotherWayToGetANum
   return (x+y)
}

I would be happy to tell Haskell explicitly that my computations are 
independent (like the above), to expose parallelization opportunities.  
Right now, not only can I NOT do that, I am forced to do the exact 
opposite, and FORCE sequentiality.
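
For what it is worth, the independence can at least be expressed with
today's combinators, if not with today's syntax; a small sketch
(getSomeNum and anotherWayToGetANum are hypothetical stand-ins for the
actions in the post):

import Control.Monad (liftM2)

-- run two independent computations and pair the results
both :: Monad m => m a -> m b -> m (a, b)
both = liftM2 (,)

example :: IO Int
example = do
  (x, y) <- both getSomeNum anotherWayToGetANum
  return (x + y)
  where
    -- stand-ins for the actions mentioned above
    getSomeNum          = return 1
    anotherWayToGetANum = return 2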


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Stronger static types, was Re: [Haskell-cafe] Re: Versioning

2006-12-22 Thread Jacques Carette

Brian Hulley wrote:
Would it be possible to augment Haskell's type system so that it was 
the same as that used in Epigram?


No, no!  Epigram is a wonderfully pure research experiment in one corner 
of the design space.  The corner it is exploring is not particularly 
Haskell like, though the results of the exploration should bear fruit 
for Haskell now and then [and it already has].


While I am quite sure Haskell could do with more information in its 
types, proof requirements cannot be anywhere close to what they are in 
Epigram.  I am convinced there is a Haskell-compatible middle road.



Also, typing is not the only issue for compile time guarantees. Consider:

[Example of coding enumeration as pattern-match vs guards vs ... deleted]

This is much more of an engineering issue than a theoretical issue.  In 
some cases (explicit pattern match with no guards), coverage checking is 
trivial because the language you are dealing with is so simple.  In 
other cases (guards), in general the problem is undecidable, but there 
are many many particular cases where there are applicable decision 
procedures.  It seems to be a common choice amongst compiler writers to 
not wade into these waters -- although the people doing static analysis 
have been swimming there for quite some time.


My feeling is that slowly increasing amounts of static analysis will be 
done by compilers (beyond just types or the current strictness 
analyses) to include these kinds of total/partial checks on guards, 
shape analysis, etc.  It is already happening; the only question is 
how quickly it will happen in Haskell.  Maybe it is time 
for ICFP and SAS to be held together.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Versioning

2006-12-21 Thread Jacques Carette

Lennart Augustsson wrote:
I must second this opinion.  There's this (false) perception that you 
need all
kinds of extensions to make Haskell usable.  It's simply not true.  
Certain

extensions can make your life easier, but that's it.


To write code in Haskell, this is true.

However, one of the wonderful things about Haskell is how much the type 
system helps you.  And if you want the type system to help you even more 
[which, as a programmer having suffered from dynamic typing too long, I 
really want], those extensions are really needed.


In other words, you can program in Haskell just fine without 
extensions.  But if you want that next level in type safety, 
extensions is where it's at, at least for the kind of code I write.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Versioning

2006-12-21 Thread Jacques Carette

Neil Mitchell wrote:

In other words, you can program in Haskell just fine without
extensions.  But if you want that next level in type safety,
extensions is where it's at, at least for the kind of code I write.


What level of safety do these type extensions give you? 
Check out many, many, many of Oleg's postings to the Haskell list.  His 
code expresses this much better than my words can.



The biggest
runtime crasher is probably pattern match failings, something that
most of these type extensions don't catch at all!
Array out-of-bounds, fromJust, head on an empty list, and pattern-match 
failures are in my list of things I wish the type system could help me 
with.  And sometimes it can [again, see Oleg's posts].  But this is 
definitely where I am *eager* to see developments.
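
As a small illustration of the kind of guarantee meant here, a head that
cannot fail, written against a non-empty list type (a sketch; a library
version of this type now ships as Data.List.NonEmpty):

data NonEmpty a = a :| [a]

safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

-- e.g. safeHead (1 :| [2,3]) == 1, and there is no empty case to crash on.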



They do give you some safety, but not a massive amount, and not in the
places where it would be truely useful.

Unfortunately, I agree.  But I'll still take what I can get!

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Stronger static types, was Re: [Haskell-cafe] Re: Versioning

2006-12-21 Thread Jacques Carette
Yes, dependent types have a lot to do with all this.  And I am an eager 
lurker of all this Epigram.


Scott Brickner wrote:

Jacques Carette wrote:
Array out-of-bounds, fromJust, head on an empty list, and 
pattern-match failures are in my list of things I wish the type 
system could help me with.  And sometimes it can [again, see Oleg's 
posts].  But is definitely where I am *eager* to see developments.
I agree with you, though - I'm very eager to see further developments 
along these lines. It's the main reason I started learning Haskell, 
and I'm absolutely convinced that functional programming and this kind 
of increasingly strong typing are the way of the future for programming.
What must be remembered is that full dependent types are NOT needed to 
get a lot of the benefits of dependent-like types.  This is what some of 
Oleg's type gymnastics shows (and others too).  My interest right now 
lies in figuring out exactly how much can be done statically. 
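
A standard illustration of that point (my example, not Oleg's code): with
GADTs, already available in GHC at the time, a list indexed by a unary
type-level length makes head total, with no dependent types in sight.

{-# LANGUAGE GADTs, EmptyDataDecls #-}

data Z
data S n

data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

-- Total: the empty case is ruled out by the type, not by a runtime check.
vhead :: Vec (S n) a -> a
vhead (Cons x _) = x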

For example, if one had decent naturals at the type level (ie naturals 
encoded not-in-unary) with efficient arithmetic AND a few standard 
decision procedures (for linear equalities and inequalities say), then 
most of the things that people currently claim need dependent types are 
either decidable or have very strong heuristics that work [1].


Jacques

[1]
@book{SymbolicAnalysis,
 author= {Thomas Fahringer and
  Bernhard Scholz},
 title = {Advanced Symbolic Analysis for Compilers: New Techniques
  and Algorithms for Symbolic Program Analysis and 
Optimization},

 publisher = pub-SV,
 series= {Lecture Notes in Computer Science},
 volume= {2628},
 year  = {2003},
 isbn  = {3-540-01185-4}
}
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] AT solution: rebinding (>>=) for restricted monads

2006-12-19 Thread Jacques Carette

David Roundy wrote:

The trouble is that your solution doesn't allow you to use do-notation with
the IxMonad.  And if you did allow yourself to use do-notation by rebinding
(>>=), etc, then you wouldn't be able to use ordinary monads with
do-notation in the same module.  That's what makes things tricky, since an
IxMonad is different-kinded from Monad, so you can't make a monad an
instance of IxMonad.
  

Seems to me that this screams for camlp4.  Oops, wrong language ;-)

But seriously, this kind of thing seems to arise often enough that 
having a standard method for doing syntax extensions for Haskell seems 
like a good idea.


And as far as making Monad instances for IxMonads goes, this is where partial 
application at the class level would come in rather handy.  Seems to me 
that (at least) IxMonad m () () should be a Monad.
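
To make the partial-application point concrete, a hedged sketch (my code;
the IxMonad signature below is the standard parameterised-monad one and is
my assumption of what is meant here). Without partial application one has
to tie the indices off with a newtype, and on current GHC that also means
writing the Functor and Applicative boilerplate by hand:

class IxMonad m where
  ireturn :: a -> m i i a
  ibind   :: m i j a -> (a -> m j k b) -> m i k b

-- the indices are pinned to (), since the class cannot be partially applied
newtype Plain m a = Plain { runPlain :: m () () a }

instance IxMonad m => Functor (Plain m) where
  fmap f (Plain m) = Plain (m `ibind` (ireturn . f))

instance IxMonad m => Applicative (Plain m) where
  pure = Plain . ireturn
  Plain mf <*> Plain mx =
    Plain (mf `ibind` \f -> mx `ibind` (ireturn . f))

instance IxMonad m => Monad (Plain m) where
  Plain m >>= f = Plain (m `ibind` (runPlain . f))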


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] On the various faces of Composition

2006-12-19 Thread Jacques Carette
[As recently seen on LtU - http://lambda-the-ultimate.org/node/1919] An 
amazing paper

http://www.dbmcintyre.pwp.blueyonder.co.uk/index_f/menu_f/j_f/apl95.pdf
on composition as seen from APL and J.  The terminology (using verbs, 
nouns, adverbs, etc) is _so_ much more evocative than the terminology 
used for similar ideas here, I hope that some of it can rub off.  Also, 
if anyone thought that point free programming was something 
mostly-recent, mostly-Haskell, they _really_ need to read this paper!


Also, I am fairly sure that some of the high-powered operators in J may 
stress Haskell's type system quite severely, if not to the breaking 
point.  It would be a fascinating exercise to try to reproduce the 
various examples in that paper in as-idiomatic-as-possible Haskell.


While J's syntax is somewhat on the terse side, its power regarding 
practical mathematics is superb -- especially when it comes to 
polymorphism. 


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] AT solution: rebinding (>>=) for restricted monads

2006-12-19 Thread Jacques Carette

First, I believe that this paper
http://homepages.inf.ed.ac.uk/ratkey/param-notions.pdf
is intimately related to WitnessMonad.

David Roundy wrote:

Rebinding the do notation is at least reasonably clean, it's just that we
don't want to lose the ability to mix with ordinary monads.  
This inability is what made me suggest an experimental syntax extension 
rather than rebinding.



I'm not sure that syntax extensions are very often a good idea...
  
I view syntax extensions as a proof-of-concept mechanism: you show that 
your new idea, if it had proper syntactic support, can make some kinds 
of programming a lot easier and clearer.  Then hopefully the extension 
gets incorporated into official syntax.  If not, then the extension 
should in time disappear.


The main advantage is to make experiments (for new syntax) possible.  
Right now, the barriers to surmount for syntactic matters is quite 
high.  I agree that it is rare that a syntax extension is really a 
good idea.



data Witness a b

instance Monad m => WitnessMonad m where
  type W m = Witness () ()
  (>>=) = (Prelude.>>=)
  (>>) = (Prelude.>>)
  return = Prelude.return
  fail = Prelude.fail

which I think is quite pretty.  It allows the Monadlike object to have kind
* -> *, while still allowing us to hide extra witness types inside and pull
them out using the W function.
  
Yes, if you can make that work, that is quite pretty. 


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Debugging partial functions by the rules

2006-11-15 Thread Jacques Carette



Robert Dockins wrote:

Or how about ??

lookupVariable target env =
   case [ v | (n,v) <- env, n == target ] of
  (x:_) -> x
  _ -> assert False $ "BUG: Unexpected variable out of scope "
   ++ show target ++ " in environment " ++ show env


Others have pointed out that, in the CURRENT Haskell semantics, the above 
is quite difficult to reason about.


Note that the whole point of Wolfram Kahl's Pattern Matching Calculus is 
to restore equational reasoning to pattern-matches.

http://www.cas.mcmaster.ca/~kahl/PMC/

Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A type class puzzle

2006-10-31 Thread Jacques Carette

Greg Buchholz wrote:

I guess it just looks really strange to my eyes.  For example, foo
and bar are legal, but baz isn't.  That's how I was thinking of the
situation, but I guess the type classes iron out the differences. 
  

foo :: Int -> Int -> Int -> Int
foo 0 = (+)

bar :: Int -> Int -> Int -> Int
bar 1 x = succ

baz :: Int -> Int -> Int -> Int
baz 0 = (+)
baz 1 x = succ




This could be understood as a weakness in the de-sugaring of 
pattern-matching, because


bong :: Int -> Int -> Int -> Int
bong 0 = (+)
bong 1 = \x -> succ

is just fine.

Jacques

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Numeric type classes

2006-09-20 Thread Jacques Carette


Whenever people start discussing the Numeric type classes, the true 
scope of what a refactoring can (and should?) be is frequently 
under-estimated.  The 'structure' of algebraic objects in mathematics 
has been studied quite a lot (in mathematics and in CS, but not so much 
by programming language people it seems).  So I point out work like

http://www-sop.inria.fr/cafe/Manuel.Bronstein/libaldor/html/
which already has a richer set of type classes, and that's just 
Aldor's prelude.  When you get going, you get the Algebra library

(http://www-sop.inria.fr/cafe/Manuel.Bronstein/algebra/)
which is _huge_.  And most of the discussion on Numeric has been around 
the algebraic (Monoid, Ring, Normed, etc) structures that Numeric right 
now 'hides'.

[Hopefully this answers your 'relevance' question].


Computer programming is of course extremely nominal to provide
abstraction and separation of concerns.  Yes, anonymous functions are
handy, but I could give them up if I had named local functions.
Yes, you can even go to unlambda and only use combinators.  Practically
we find names extremely useful.
  
I am NOT arguing for no names!  I also like names.  What I am arguing 
for is to

a) be able to use names whenever convenient
and more importantly
b) be able to provide _renamings_ when previously chosen names are 
_inconvenient_.
In many ways, this is what ML's with type foo = bar qualifiers allow 
you to do to a certain extent when putting together modules/functors.


It is also the basic idea behind the Adaptor and the Proxy patterns in 
OO.  All these solve the same problem: how do you get around the issue 
that names in a module/class/whatever have been chosen in one way, and 
you need to use them in another.


Various algebraic specification languages have 
thus adopted this too, so that you are not forced to give unique names 
to all your concepts; you can in fact give them meaningful names 'in 
context', and use a remapping when you want to say that you obey a 
particular interface.



This sounds neat, but I'd be worried about how cumbersome it was in
practice.
  
In practice, name clashes do not appear that often, so unique names are 
quite common.  Name clashes tend to appear only for the most basic 
concepts that are highly polymorphic (like Monoid and Group!).  But the 
same happens with generalized Container data-structures too (you can 
'push' onto both a Stack and a Queue, but might want to use different 
names even though the concepts are essentially the same).


It appears to work quite well.  See Specware
http://www.specware.org/index.html
and many of the splendid papers available at
http://www.kestrel.edu/home/publications/

Another line of work is Maude (http://maude.cs.uiuc.edu/), with explicit 
renamings

http://maude.cs.uiuc.edu/maude2-manual/html/node78.html
and more importantly VIEWs
http://maude.cs.uiuc.edu/maude2-manual/html/node81.html
(which have been talked about a lot on the various Haskell mailing 
lists, but Maude has had it implemented for quite some time).


There are plenty of others, like CASL (http://www.cofi.info/CASL.html) 
and the OBJ family (http://cseclassic.ucsd.edu/~goguen/sys/obj.html) 
with similar features.


In other words, the specification language people have been down this 
road quite some time ago, and seem to have worked out a fair bit of it.  
PL people should now liberally borrow all these good ideas IMNSHO.



Thanks.  The ML interface paper looks quite interesting.  Are you aware
of any implementations?
  
No - but pressure is slowly building to do so.  It is not an easy task, 
but, as the OCaml developers themselves are discovering while they are 
heavily 'functorising' some of their legacy code, there is a real need.  
I would be willing to believe that if there was a real push to use 
common type classes across GHC/Hugs/nhc/etc, the same phenomenon would 
'appear'.


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Numeric type classes

2006-09-14 Thread Jacques Carette

David Menendez wrote:

 * Having (+) work on lists, tuples and all the other monoids would
   make error messages more complicated.



It gets worse than that. Imagine trying to explain to someone why 1 +
sin is actually \a -> const 1 a + sin a.
  
It isn't that hard - it is done routinely in mathematics courses.  In 
fact, that is what 1+sin means in Maple today (and has for 25 years).  
It is also what it means in MuPAD.  AFAIK, that is also what 1+Sin means 
in Mathematica.  That is what polymorphism is all about! [This is really 
equational-theory polymorphism rather than parametric polymorphism, but 
that's a minor detail, since Monad polymorphism is _also_ 
equational-theory polymorphism].


This kind of polymorphism [where you add the 'right number' of arrows on 
the left] is quite useful.  Things like differential operators become 
quite tiresome to write down if you have to pedantically spell 
everything out, even though there is only one 'sensible' way to 
interpret a given expression [1].


In the very same way that fromInteger can project a literal integer into 
other typeclasses, one can project values into spaces of functions by 
just adding arrows on the left (ie exactly what const does).  It is 
possible to make this quite formal, but you need Natural(s) (as an 
additive monoid) on the type level, and then be able to be polymorphic 
over _those_ to do make it all work.  It should even be decidable [but 
that part I have not checked].  Something I should write up one of these 
days, but in the meantime go read [1]!
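
For completeness, the pointwise lifting alluded to here can be written
down directly; a sketch (my own addition) of the well-known, if
contentious, Num instance for functions:

{-# LANGUAGE FlexibleInstances #-}

instance Num b => Num (a -> b) where
  f + g         = \x -> f x + g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

-- With this in scope, 1 + sin denotes \a -> 1 + sin a, so (1 + sin) 0 == 1.0.
main :: IO ()
main = print ((1 + sin) (0 :: Double))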


Jacques

[1] Bjorn Lisper and Claes Thomberg have already investigated something 
very close to this, see

http://www.mrtc.mdh.se/index.php?choice=publications&id=0245
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Numeric type classes

2006-09-14 Thread Jacques Carette



[EMAIL PROTECTED] wrote:
That is what polymorphism is all about! 


Not in this context, sorry. This is a convention. Another one may give 
you

an abomination, e.g., 1+sin means 1 plus the addres of the sin routine.
(Of course not in a 'decent' language, but I know a few undecent.


No, it is much more than convention.  In this case, it can be made 
completely formal.  The paper I referred to offers one way to do this.  
I sketched another.


Yes, it is possible to have 1+sin become meaningless in 'indecent' 
languages.  But as the mathematics (and Maple and ...) convention shows, 
there is one reasonable way to make this make sense which turns out to 
be quite useful.  In other words, the convention can be turned into a rule.


ML and Haskell have (thankfully) learned a lot from Lisp and Scheme, and 
then proceeded to 'tame' these with static typing.  And this is 
continuing - witness the flurry of type-theoretical research on 
continuations in the last 15 years (and very recent papers on typed 
delimited continuations).  More recently, GADTs have added to the set of 
'safe' programs that can by typed (which Lisp programmers writing 
interpreters knew all along).  I am saying that the case of 'adding 
arrows to the left' is another safe practice.  I backed myself up with a 
published reference [ie I took your comment regarding some of my 
previous haskell-cafe postings seriously!].


Jacques
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Numeric type classes

2006-09-13 Thread Jacques Carette
Your solution would imply[1] that all Rationals are multiplicatively 
invertible -- which they are not.


The Rationals are not a multiplicative group -- although the _positive_ 
Rationals are.  You can't express this in Haskell's type system AFAIK.


Your basic point is correct: if you are willing to use a tag (like 
Multiply and Add), then you can indeed have a domain be seen as matching 
an interface in 2 different ways.  Obviously, this can be extended to n 
different ways with appropriate interfaces. 


Jacques

[1] imply in the sense of intensional semantics, since we all know that 
Haskell's type system is not powerful enough to enforce axioms.


PS: if you stick to 2 Monoidal structures, you'll be on safer grounds.
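
A hedged sketch of that safer, Monoid-level version of the tagging idea
(my own illustration, not code from the thread; TaggedMonoid is a
hypothetical class name):

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

data Add      = Add
data Multiply = Multiply

class TaggedMonoid tag a where
  unit :: tag -> a
  op   :: tag -> a -> a -> a

instance TaggedMonoid Add Rational where
  unit Add = 0
  op   Add = (+)

instance TaggedMonoid Multiply Rational where
  unit Multiply = 1
  op   Multiply = (*)

-- e.g. foldr (op Add) (unit Add) [1,2,3] :: Rational   evaluates to 6 % 1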

Brian Hulley wrote:
If the above is equivalent to saying Monoid is a *superclass* of 
SemiRing in two different ways, then can someone explain why this 
approach would not work (posted earlier):


   data Multiply = Multiply
   data Add = Add

    class Group c e where
        group :: c -> e -> e -> e
        identity :: c -> e
        inverse :: c -> e -> e

    instance Group Multiply Rational where
        group Multiply x y = ...
        identity Multiply = 1
        inverse Multiply x = ...

    instance Group Add Rational where
        group Add x y = ...
        identity Add = 0
        inverse Add x = ...

    (+) :: Group Add a => a -> a -> a
    (+) = group Add

    (*) = group Multiply

    class (Group Multiply a, Group Add a) => Field a where ...

If the objection is just that you can't make something a subclass in 
two different ways, the above is surely a counterexample. Of course I 
made the above example more fixed than it should be ie:


    class (Group mult a, Group add a) => Field mult add a where ...

and only considered the relationship between groups and fields - 
obviously other classes would be needed before and in-between, but 
perhaps the problem is that even with extra parameters (to represent 
*all* the parameters in the corresponding tuples used in maths), there 
is no way to get a hierarchy?


Thanks, Brian.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


  1   2   >