Control-C during startup raises error

2004-10-08 Thread Frank Atanassow
Dunno if you actually consider this a bug or not, but since the message 
says to report it... This is on Mac OS X 10.3.5 with GHC 6.2.1.

$ ghci
^Cghc-6.2.1: internal error: main thread has been GC'd
Please report this as a bug to [EMAIL PROTECTED],
or http://www.sourceforge.net/projects/ghc/
In other words, if you press control-C during startup you get this 
error message. Not unreasonable, but I suppose if it is easy to check 
for such an interrupt it is better not to issue an error message which 
advises the user to post a bug report.

ghci works fine for me otherwise, although ghc itself produces bad 
executables on OS X. (I have not yet tried the 6.2.1 installation 
advice for OS X which appeared on the other list.)

Regards,
Frank
___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: [Haskell-cafe] closed classes [was: Re: exceptions vs. Either]

2004-08-14 Thread Frank Atanassow
On Aug 9, 2004, at 5:00 PM, Simon Peyton-Jones wrote:
Closed classes are certainly interesting, but a better way to go in 
this case is to allow the programmer to declear new kinds, as well as 
new types.  This is what Tim Sheard's language Omega lets you do, and 
I'm considering adding it to GHC.
FWIW, Martin Wehr designed a Haskell-like type system with:
* not only datakinds, but datasuperkinds, datasupersuperkinds, etc., in 
short, data-n-types for every finite dimension n, all parametric,

* along with parametrically polykinded, polysuperkinded, etc., in 
short, poly-n-morphic functions,

* along with map, simple folds and nested folds for all these things,
* not to mention an algorithm for principal type inference
in his 1998 paper:
@misc{wehr98higher,
  author = {M. Wehr},
  title  = {Higher order algebraic data types},
  text   = {Martin Wehr. Higher order algebraic data types (full paper).
            Technical report, University of Edinburgh,
            URL http://www.dcs.ed.ac.uk/home/wehr, July 1998.},
  year   = {1998},
  url    = {citeseer.nj.nec.com/wehr98higher.html}
}
The title of the paper is a bit misleading: `higher-dimensional' would 
be better than `higher-order', since higher-order functions are the chief 
thing missing from Wehr's system. But these are easily added in the 
standard fashion, which is to say, only at the value level, and by 
simply neglecting the problem of defining folds for datatypes involving 
(->).

Two significant differences between Wehr's system and the one Simon 
described are that every kind in Wehr's system has values, and that 
there is no distinguished kind *.

I tried to champion this (very incoherently) in a talk at the Hugs/GHC 
meeting in, I think, 2000, where Tim also presented some of his early 
ideas on datakinds.

(BTW, given such an expressive system, I think you may find, as I did, 
that the number of ways to represent what amount to essentially the 
same type grows ridiculously large, and this is one of the motivations 
for treating more general notions of type equivalence than equality, 
like for example canonical isomorphy as I am doing in a forthcoming 
paper.)

There is also an extended abstract of Wehr's paper in CTCS (no citation 
handy---I'm posting this from home), and a categorical semantics 
which is, however, not for the faint of heart:

@article{wehr99higherdimensional,
  author  = {Martin Wehr},
  title   = {Higher-dimensional syntax},
  journal = {Electronic Notes in Theoretical Computer Science},
  volume  = {29},
  year    = {1999},
  url     = {citeseer.nj.nec.com/wehr99higher.html}
}
Eelco Visser also defines a notion of multi-level type system, and 
gives several examples of how they can be used, in his PhD thesis. One 
of the examples, as I recall, shows how datakinds and polykinded 
functions subsume uni-parameter type classes (without overlapping 
instances).

Regards,
Frank
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Relating functors in Category Theory to Functor

2004-07-02 Thread Frank Atanassow
On Jun 29, 2004, at 6:46 PM, Iavor S. Diatchki wrote:
In Haskell, natural transformations are polymorphic functions, tau 
:: f a -> g a. For example, maybeToList :: Maybe a -> [a].

actually i think this is a good approximation.  not all polymorphic 
functions are natural transformations, but simple ones usually are.
I think you have it backwards, unless you are thinking of a more 
general notion of polymorphism than parametric polymorphism.

Ad hoc polymorphic functions may or may not be natural; you have to 
verify the naturality condition in each case.

But every parametrically polymorphic function is a natural 
transformation, though the converse fails: not every natural 
transformation is parametrically polymorphic. In particular, some 
natural transformations are generic functions (polytypic), and their 
components (instantiations at a type) are not all instances of a single 
algorithm.
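This claim can be made concrete in Haskell. A small sketch of my own 
(maybeToList is the standard library function; naturalAt merely tests 
the naturality square at given arguments):

```haskell
import Data.Maybe (maybeToList)

-- Naturality for maybeToList :: Maybe a -> [a] says that
--   fmap f . maybeToList  ==  maybeToList . fmap f
-- for every f. Parametricity (the "free theorem" for the type
-- forall a. Maybe a -> [a]) guarantees this holds for any f.
naturalAt :: Eq b => (a -> b) -> Maybe a -> Bool
naturalAt f x = (fmap f . maybeToList) x == (maybeToList . fmap f) x
```

No finite test proves the law, of course, but parametricity does.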

I'm not sure if every natural transformation on endofunctors on a 
category modelling a Haskell-like language is denoted by either a 
parametric or generic term.

Regards,
Frank


Re: [Haskell-cafe] Join and it's relation to = and return

2004-06-11 Thread Frank Atanassow
On Jun 9, 2004, at 9:39 AM, Jerzy Karczmarczuk wrote:
I have *nothing* to add, just a question.
Do you /anybody/ know of any edible work on ADJUNCTIONS in the context 
of Haskell structures? Perhaps instead of searching for 'inverses' one 
should think more about adjoints?...
Yes, I think this is the right way to go. If you look at work by Power, 
Thielecke and Streicher on continuations [*], you will find that they 
model type negation as a self-adjoint functor on a closed premonoidal 
category, and IIRC a closed premonoidal category is equivalent to a 
thing called a closed kappa-category with a computational monad on it. 
The self-adjointness corresponds to the involutivity of negation.

This all depends on the tensor product being commutative. It's also 
possible to drop commutativity, in which case you end up with _two_ 
exponentials, also called residuals, corresponding to the two ways to 
curry a non-commutative product. Such a category can serve as a 
proof-theoretic model for Lambek calculus, which is used in categorial 
grammar.

In this situation, if you demand a multiplicative inverse, or 
equivalently a dual to the unit, you get _two_ negations or (better I 
think) `dualizers'. Barr treats this in a few of his papers, including 
[1], and [2] for the simpler commutative case. There is a neat way of 
forming such categories, called the Chu construction [3], which has 
provoked much interest in the categorical community.

I once wrote some notes on something I called action logic, which tried 
to organize these ideas at a simpler level. The logic was 
non-commutative and had two dualizers like I said, and was designed to 
be sound and complete for a model where every proposition was 
interpreted by an adjoint pair of functions (or, just a Galois 
connection), and dualization by replacing a function by its (unique) 
adjoint. It all worked out very nicely because adjoints have such neat 
properties. I still have the notes if you're interested.

[*] CiteSeer seems kind of broken, so try:
http://www.google.com/search?q=site%3Aciteseer.ist.psu.edu+thielecke
[1] Michael Barr, Non-symmetric *-autonomous categories, 1999.
ftp://ftp.math.mcgill.ca/pub/barr/asymmps.zip
[2] http://citeseer.ist.psu.edu/251562.html
[3] http://citeseer.ist.psu.edu/barr96chu.html
Regards,
Frank


Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 10, Issue 3

2004-06-11 Thread Frank Atanassow
First, concerning your question about monads and multiplication: a 
monad on a category C is exactly a monoid object in the category [C->C] 
of endofunctors on C and natural transformations between them. A 
monoid in a category is, as you expect, an object X with arrows 
m : X*X -> X and u : 1 -> X satisfying some laws, where * is the monoidal 
tensor and 1 is its unit. In [C->C], * is functor composition and 1 is 
the identity functor; so m becomes `join' and u becomes `return'. See 
the Example at the bottom of page 75 in Chapter 4 of [1].
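In Haskell terms this dictionary reads as follows (a sketch of mine; 
join is in Control.Monad):

```haskell
import Control.Monad (join)

-- The monoid multiplication m : T*T -> T is join, definable from (>>=):
joinViaBind :: Monad m => m (m a) -> m a
joinViaBind mma = mma >>= id

-- The unit u : 1 -> T is return. The monoid laws then become:
--   join . fmap join  ==  join . join                  (associativity)
--   join . return     ==  id  ==  join . fmap return   (left/right unit)
```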

On Jun 10, 2004, at 4:23 PM, Ron de Bruijn wrote:
For a counter-example, think of the dual category Set^{op}. 
A morphism f : a -> b in that category means that there is a 
function f^{op} : b -> a where a and b are sets, however f 
probably isn't a function at all.
Well, what is it then?
The short answer is: it's the formal dual of a function, and that's 
all. You will have to get used to this fact about categorical duality, 
that it's just a formal construction and has no deep meaning in and of 
itself.

The short long answer is: it's an antifunction, or a predicate 
transformer (I think).

What is an antifunction? Well, if you factor a function as a surjection 
followed by a bijection followed by an injection (as you can always do 
uniquely in Set using image factorization), then you can understand a 
function as something which identifies, then renames, then adjoins 
elements of a set. If you turn this map around, and look at what 
happens to the elements on their way home, you can see that what 
happens is some elements get deleted, then renamed and then copied. So 
a function identifies, renames and adjoins while an antifunction 
deletes, renames and copies.

To formalize this perspective, you can view an antiset as a boolean 
algebra using the faithful embedding of Set^{op} in Set via the 
contravariant powerset functor. The action on arrows turns an antifunction 
into what I imagine is called a predicate transformer. This is nicely 
explained in Vaughan Pratt's paper [2] which, BTW, is about the Chu 
construction which I mentioned in my last post to Jerzy.

Is the following a good summary?
A multiplication is just a name for an operation that
is defined or not defined for each mathematical
construction in terms of to which laws the operation
should comply. The laws are then things like
communativity and so on.
Multiplication is just a name often used for any operation which is 
typically but not always associative and has right and left units, and 
is perhaps also commutative. Addition is exactly the same. If one has 
two operations which satisfy these properties, then one often 
distinguishes them by saying one is addition and the other is 
multiplication.

It is all informal convention; the important things are the laws.
[1] Andrea Asperti and Giuseppe Longo. Categories, Types and 
Structures. 1990. Available here: 
http://www.di.ens.fr/users/longo/download.html

[2] Vaughan Pratt. The Stone Gamut: A Coordinatization of Mathematics. 
In Proc. LICS '95, 1995.
http://boole.stanford.edu/pub/gamut.ps.gz

Regards,
Frank


Re: [Haskell-cafe] Toy application advice wanted

2004-05-07 Thread Frank Atanassow
On May 6, 2004, at 6:59 PM, S. Alexander Jacobson wrote:
I think someone wrote a book about multi-media
apps in Haskell (I've seen a chapter somewhere
from Conal Elliot) but I don't remember what it
was.
Probably Paul Hudak's The Haskell School of Expression.

  http://www.haskell.org/soe/

I had forgotten about this; maybe the soundwave GUI app idea is not so 
unrealistic? I haven't read SOE, but my impression is that it comes 
with a simple graphics library, and an on-topic book might get a novice 
up to speed quickly.

Regards,
Frank


Re: [Haskell-cafe] Toy application advice wanted

2004-05-05 Thread Frank Atanassow
On May 3, 2004, at 5:52 PM, [EMAIL PROTECTED] wrote:
I've got an interesting task this week for my job.  (Note that this 
will undoubtably last for longer than a week).  I'm evaluating several 
high-level languages as development vehicles for our next suite of 
applications.  The languages I'm currently considering are Scheme, 
Erlang, and Haskell...

The toy application I've designed for myself is a simple GUI-based 
app that can load a Sun .au format sound file, display its waveform in 
a window, perform some simple filtering, play the sound for the user, 
and then save the changed sound file back out to disk.  If I get 
familiar enough with the respective environments I'd like to add 
zooming in/out and scrolling to the waveform display...

I have an amortized four days (32 hours!!!) to implement this simple 
application in Haskell...

Any advice/pointers/flames welcome.  Thanks in advance.
Frankly, I think it's completely unrealistic to expect to be able to 
fairly evaluate Haskell in 32 hours. As you noted yourself, Scheme and 
Erlang, being strict, are much closer to conventional programming 
languages than Haskell is, so it's easier to transfer skills to them. 
Furthermore, they're untyped, and learning how to exploit Haskell's 
static typing is one of the bigger hurdles to learning how to exploit 
Haskell.

Even if, as you wrote in a later post, you lower your sights to 
something less ambitious than a full-blown GUI app (which I think is a 
good idea), it's hard to get a grasp on concepts like laziness, recursive 
datatypes, parametric polymorphism, monads, type classes and so on in 
less than a week, even for experienced programmers. At best, I imagine 
you'll come away curious and hungry for more; but clearly that doesn't 
suffice for a language evaluation.

Of course, the fact that Haskell can't be grasped in a day (or week) 
can be construed as a practical argument against its adoption. On the 
other hand, if you accept that there's no such thing as a free lunch, 
you might consider that a merit; what is the point of adopting a new 
language if it doesn't change the way you think about programming, and 
let you write old programs in new, perhaps better, ways? [1]

While Haskell is IMO the best programming language around, and I want 
to encourage its broader adoption, if you want a well-designed language 
with good implementation and support which permits swifter skill 
transfer, may I (strongly) recommend you add Objective Caml to your 
list of candidates? Once you acquire some experience with an ML-like 
language such as OCaml, which after all resembles Haskell in many ways, 
you will, I believe, find yourself better equipped to evaluate Haskell.

Regards,
Frank
[1] Think about polynomials and real numbers. Complex numbers were, I 
believe, invented specifically to ensure that every polynomial equation 
has a solution. So, to address some problems, we need to take a step 
backward before we can take one forward.



Re: [Haskell] HaXml and XML Schema

2004-03-17 Thread Frank Atanassow
On Mar 10, 2004, at 8:56 AM, [EMAIL PROTECTED] wrote:
Example (readers familiar with the problem may
skip this):
<salutation>Dear Mr. <name>Robert Smith</name>.</salutation>
This structure is represented by the XML Schema

<xsd:element name="salutation">
  <xsd:complexType mixed="true">
    <xsd:sequence>
      <xsd:element name="name" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
How would you represent this in Haskell?
A first idea may be to store the enclosing strings:
data Salutation = Salutation String Name String

This approach is not scaling well. E.g., there may be multiple
names in the text...
No, according to the content model, there must be exactly one 
occurrence of name in the content of salutation: not zero, and not 
more than one. To allow zero or more you need to add minOccurs and/or 
maxOccurs attributes. This is one of the ways that mixed content in XML 
Schema differs from that in DTD's, and is treated in the references 
Johan posted.

So, on the contrary, your first declaration:

  data Salutation = Salutation String Name String

is a better translation of this schema (fragment), than your second 
attempt:

  data Salutation' = Salutation' [Either Char Name]
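Side by side, with a hypothetical Name wrapper (the original 
declarations leave it unspecified):

```haskell
newtype Name = Name String deriving (Eq, Show)

-- Exactly one <name>, with the text before and after it: this
-- mirrors the schema's content model directly.
data Salutation = Salutation String Name String deriving (Eq, Show)

-- Arbitrarily interleaved text and names: strictly looser than the
-- schema, which admits exactly one occurrence of <name>.
data Salutation' = Salutation' [Either Char Name] deriving (Eq, Show)

example :: Salutation
example = Salutation "Dear Mr. " (Name "Robert Smith") "."
```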

Regards,
Frank
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


[Haskell] AC-unification libraries?

2004-02-19 Thread Frank Atanassow
In the near future I expect to have need of an implementation of some 
basic term rewriting system algorithms, most notably:

  - term matching and unification
  - same, modulo associativity
  - same, modulo associativity and commutativity
The first, of course, is easy to do myself; the second, I imagine, is 
not too difficult but, as I hear the last is very difficult to 
implement halfway efficiently, I would be grateful if someone could 
point me to an existing Haskell implementation. (No, I could not find a 
pointer at the Haskell website, or by scouring the web.)
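For the first item, here is a minimal sketch of my own of first-order 
syntactic unification with an occurs check, just to fix ideas (none of 
the asked-for libraries are assumed):

```haskell
data Term = Var String | App String [Term] deriving (Eq, Show)
type Subst = [(String, Term)]

-- Apply a substitution, chasing bound variables.
apply :: Subst -> Term -> Term
apply s (Var v)    = maybe (Var v) (apply s) (lookup v s)
apply s (App f ts) = App f (map (apply s) ts)

-- Syntactic unification, returning a unifying substitution if any.
unify :: Term -> Term -> Maybe Subst
unify a b = go [(a, b)] []
  where
    go [] s = Just s
    go ((x, y) : rest) s = case (apply s x, apply s y) of
      (Var v, t) -> bind v t rest s
      (t, Var v) -> bind v t rest s
      (App f ts, App g us)
        | f == g && length ts == length us -> go (zip ts us ++ rest) s
        | otherwise                        -> Nothing
    bind v t rest s
      | t == Var v = go rest s          -- trivial binding, skip
      | occurs v t = Nothing            -- occurs check: no cyclic terms
      | otherwise  = go rest ((v, t) : s)
    occurs v (Var w)    = v == w
    occurs v (App _ ts) = any (occurs v) ts
```

Matching is the one-sided special case (only one term's variables may 
be bound); the associative and AC variants are far more involved, as 
noted.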

BTW, bonus points if such a framework supports extra things like checks 
for TRS confluence, termination, enumeration of critical pairs, 
Knuth-Bendix completion, etc.

Thanks in advance.

Regards,
Frank


Re: Haskell naming conventions

2003-12-24 Thread Frank Atanassow
On Dec 24, 2003, at 2:29 AM, Sean L. Palmer wrote:

It occurs to me that Haskell would be quite a bit easier for OO and 
traditional programmers to grasp if Haskell would actually use the 
correct, or at least more commonly used, names for things.
I don't think changing a few keywords will have any significant impact; 
the differences between Haskell and Java run much, much deeper.

So it makes me wonder why the use of the data keyword... wouldn't it 
make more sense to say: 
 
type Maybe a = Nothing | Just a
No, it wouldn't. There is a difference between a `type' and a 
`datatype': a datatype is a special kind of type which comes equipped 
with a set of functions---the data constructors---which interact nicely 
with pattern-matching so that one can determine exhaustiveness and 
non-ambiguity of matches. Types in general, for example the Double 
type, do not satisfy these conditions.
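The contrast in two declarations (an illustration of mine):

```haskell
-- A 'type' synonym merely abbreviates an existing type; it introduces
-- no constructors and offers nothing to pattern-match against.
type Radius = Double

-- A 'data' declaration introduces a new type together with its data
-- constructors, so the compiler can check that a match such as 'area'
-- is exhaustive and unambiguous.
data Shape = Circle Radius | Square Double deriving Show

area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s
```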

Likewise with class, type class, and instance:
 
class Eq a where
        (==) :: a -> a -> Bool
 
That actually declares a type class, not a class.
According to the normal rules of English, every `type class' is 
necessarily a `class', isn't it?

So why the use of the keyword class?  Is it done merely to confuse C++ 
and Java programmers?
Yikes! The secret is out! ;)

The concept of type class in Haskell apparently roughly corresponds to 
the concept of interface in Java.  So why not call it interface? 
Haskell type classes are older than Java. Perhaps you should petition 
Sun instead?
 
Instance is also confusing:
 
instance Eq Integer where
  a == b = a `integerEq` b
 
That actually declares that Integer is a type, not an instance in 
the traditional use of the word.
No, that Integer is a type is a necessary precondition of this instance 
declaration. The declaration rather says that Integer belongs to the 
type class Eq.

The choice of the syntax `instance' here probably arises from the idea 
that a type class is a relation (predicate) on types; many people say 
that, for example, (x,y) `is an instance of' the relation R if x R y.

A C++ programmer would probably use the word subclass instead of 
instance.
Not really. Up above you already compared type classes to Java 
interfaces. It follows that a Haskell instance T of class C rather 
corresponds to a Java class T which implements an interface C.

Then consider how different a meaning return has in Haskell than it 
does in C.   ;)
return was probably chosen precisely because of what it suggests by 
analogy with C's return. Of course there are big differences, but can 
you think of a better name?

Does anyone else think this is a problem?
I think any difficulties stemming from Haskell's choice of keywords are 
dwarfed by the differences stemming from Haskell's semantics.

If so, is it fixable?
Haskell is much too old to be changing its surface syntax. Of course, 
you can always change GHC's input grammar, fork it, call it Haskell++ 
and see if anybody will use it. :) Or use Template Haskell.

 I guess from reading the many tutorials and FAQ's, that I'm in the 
same boat as everybody else.  I consider myself a pretty bright boy, 
I've learned all kinds of other programming languages, from asm to 
forth to basic, pascal, C, C++, java, and C#...  but this Haskell 
business, has me almost stumped.
The standard response is:

"It is practically impossible to teach good programming to students 
that have had a prior exposure to BASIC: as potential programmers they 
are mentally mutilated beyond hope of regeneration." -- Edsger 
Dijkstra, "How do we tell truths that might hurt?"

http://www.cs.virginia.edu/~evans/cs655/readings/ewd498.html

I mean, I understand the basic ideas pretty easily enough, but there 
seems to be such syntactical wierdness that to understand how to 
program in Haskell above the level of toy programs requires one to 
revisit every single design decision that went into making the 
language and its libraries, and grasp everything along the way, not 
only its function but also its peculiar nomenclature, and usually two 
different ways to say the same thing (maybe more).  Only after 
following this long tortuous path will one ever be able to actually 
write useful programs. 
IMO, there really isn't much `syntactical weirdness' in Haskell; it has 
a pretty accessible syntax and first-time programming students appear 
to exhibit largely the same problems with Haskell syntax as they would 
in other languages (forgotten parentheses, etc.). What you are really 
complaining about is Haskell's semantics, but you don't realize it yet.

If Haskell (or any language of this style) is ever to gain a larger 
following, it would probably be wise to accomodate the existing 
programmer knowledge base a little more.  I believe that the same core 
language, with cleaner design, different keywords, maybe different 
operators, would probably be readily accepted. 
I believe you are very mistaken on that count. Is Howard Dean sure to 
win the American election if he starts speaking with a 

Re: Data representation, maybe reflection, laziness

2003-11-04 Thread Frank Atanassow
On Tuesday, Nov 4, 2003, at 00:39 Europe/Amsterdam, Frank Atanassow 
wrote:

For example,  our translator takes the Schema type doc (representing a 
bibliographic entry) ... to a certain ugly datatype X.
Oops. For X I should have written E_doc, that is, the translation of 
Schema type doc is named E_doc (E is for Element); it's used in 
the example at the end.

 main    = interact work
 toE_doc = unparse{|E_doc|} . expand{|E_doc|} . reduce{|Doc|}
 toDoc   = expand{|Doc|} . reduce{|E_doc|} . parse{|E_doc|}
 work    = toE_doc . (\d -> d { authors = filter (/= "Dubya") (authors 
d) }) . toDoc

Regards,
Frank


GHCI crashes on Mac OS X

2003-10-09 Thread Frank Atanassow
Hi guys,

I'm using GHCI to run code output by Generic Haskell. As I'm sure 
you're aware, GH is a preprocessor. Every once in a while, I 
accidentally load a .ghs file (GH input) rather than the resulting .hs, 
and GHCI crashes with a segmentation fault:

merulo 536 ~/share/cvs/GH-Papers/data-binding $ ghci -fglasgow-exts 
-i$GH_HOME/lib/Prelude ExampleIso.ghs
   ___         ___ _
  / _ \ /\  /\/ __(_)
 / /_\// /_/ / /  | |      GHC Interactive, version 6.0.1, for Haskell 98.
/ /_\\/ __  / /___| |      http://www.haskell.org/ghc/
\____/\/ /_/\____/|_|      Type :? for help.

Loading package base ... linking ... done.
Loading object (static) ExampleIso.ghs ... Segmentation fault
I don't understand the message on the last line. Is GHC confusing GH 
source files with something else that has the same extension?

Anyway...

ExampleIso.ghs depends on some other files, but those don't get loaded 
so I assume you don't need them. I can also leave off all the flags and 
it still crashes.

Here is the info:

merulo 543 ~/share/cvs/GH-Papers/data-binding $ uname -a
Darwin merulo.local. 6.8 Darwin Kernel Version 6.8: Wed Sep 10 15:20:55 
PDT 2003; root:xnu/xnu-344.49.obj~2/RELEASE_PPC  Power Macintosh powerpc
merulo 544 ~/share/cvs/GH-Papers/data-binding $ gcc -v
Reading specs from /usr/libexec/gcc/darwin/ppc/3.1/specs
Thread model: posix
Apple Computer, Inc. GCC version 1175, based on gcc version 3.1 
20020420 (prerelease)

OS X version 10.2.8 if it matters.

Thanks.

Regards,
Frank


crashlog
Description: Binary data


ExampleIso.ghs
Description: Binary data


Re: lexer puzzle

2003-09-26 Thread Frank Atanassow
On Friday, Sep 26, 2003, at 09:16 Europe/Amsterdam, John Meacham wrote:
On Fri, Sep 26, 2003 at 08:59:12AM +0200, Ketil Z. Malde wrote:
I think there is a problem with too much overloaded syntax.  Perhaps
it is time to put non-ASCII characters to good use?
For instance, function composition could use the degree sign, °, 
and leave the . for module qualification.
why not the actual functional composition operator, ∘?

we could also make good use of all the other fun
mathematical operators.
This is very readable, but not very writable.

Until I get a keyboard with a ∘ key, I would prefer to restrict the 
syntax to ASCII/whatever and simply make it the editor's responsibility 
to display source code using special characters.




Re: pretty newby

2003-09-24 Thread Frank Atanassow
On Wednesday, Sep 24, 2003, at 17:46 Europe/Amsterdam, John Hughes wrote:
What's needed is a parser that can parse
comments, and tie them to the *right place* in the abstract syntax 
tree.
Figuring out what a comment is really commenting is probably extremely
hard...
The commenting conventions of Haddock serve this purpose pretty well. 
And Haddock's parser must hold onto the comments to do its job. Maybe 
that's a good place to start on a Haskell pretty-printer.

Regards,
Frank


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-22 Thread Frank Atanassow
On Monday, Sep 22, 2003, at 00:07 Europe/Amsterdam, Brandon Michael 
Moore wrote:
Can anyone tell me what's wrong with the following derivation?
Without going through your derivation completely, the problem is almost 
certainly polymorphic recursion. Vector is a nested datatype---its 
definition calls itself with different type arguments---and so, although

  collect :: (v -> [a]) -> (w -> [a]) -> Vector v w -> [a]

the recursive call to collect in, for example, the second clause, has a 
different type, namely:

  collect :: (v -> [a]) -> ((w,w) -> [a]) -> Vector v w -> [a]

However, in the absence of an explicit type signature, Haskell will 
assume that the types of the top-level and recursive occurrences of 
collect are equal, which they are not, so you'll get a type error.

Indeed, the error messages Dominic posted:

 ERROR Test1.hs:23 - Type error in function binding
 *** Term   : coal_
 *** Type   : (a -> b) -> (c -> [d]) -> Vector_ a e -> b
 *** Does not match : (a -> b) -> ((c,c) -> [d]) -> Vector_ a (e,e) -> b
 *** Because: unification would give infinite type

 Test1.hs:25:
 Occurs check: cannot construct the infinite type: t = (t, t1)
 Expected type: (t, t1)
 Inferred type: t
 In the first argument of `coalw', namely `w1'
 In the first argument of `(++)', namely `(coalw w1)'

corroborate this argument.

Or, are you asking instead why it is that type inference does not work 
for polymorphic-recursive functions?
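Since Vector's definition isn't reproduced in the thread, here is the 
same phenomenon on a standard nested datatype (a sketch of my own):

```haskell
-- The recursive occurrence of Nest is at type Nest (a, a), so any
-- function recursing over a Nest is polymorphic-recursive.
data Nest a = Tip | Layer a (Nest (a, a))

-- The signature is mandatory: without it, inference equates the types
-- of the top-level and recursive occurrences of 'size' and fails with
-- an occurs-check error of the kind quoted above.
size :: Nest a -> Int
size Tip         = 0
size (Layer _ n) = 1 + 2 * size n   -- recursive call at Nest (a, a)
```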

Regards,
Frank


Re: Polymorphic Recursion / Rank-2 Confusion

2003-09-20 Thread Frank Atanassow
On Saturday, Sep 20, 2003, at 13:01 Europe/Amsterdam, Dominic Steinitz 
wrote:

Can anyone tell me why the following doesn't work (and what I have to 
do to
fix it)? I thought by specifying the type of coalw as rank-2 would 
allow it
to be used both at a and (a,b).
This will never work. A function expecting a value of type forall c d. 
c -> [d] needs to be passed a function which can turn an arbitrary 
value of an arbitrary type into a list of values of every other, 
arbitrary type; the only such (total) function I can think of is const 
[].

The type checker is complaining because the value you pass it is a 
function which only accepts tuple inputs; it has to work for _all_ 
inputs of any type.
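A sketch of the situation (the names are mine):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The caller may instantiate c and d however it likes, so the argument
-- must turn *any* value into a list of *any* element type; const [] is
-- essentially the only total function that qualifies.
useIt :: (forall c d. c -> [d]) -> ([Int], [Bool])
useIt f = (f "hello", f (3.14 :: Double))

result :: ([Int], [Bool])
result = useIt (const [])
```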

If you explain what you're trying to do here, maybe someone can suggest 
a solution.

Regards,
Frank


Re: (Off-topic) Question about categories

2003-09-18 Thread Frank Atanassow
On Thursday, Sep 18, 2003, at 14:44 Europe/Amsterdam, Graham Klyne 
wrote:
My problem is this:  how does it make sense to define an equality of 
morphisms without some well-defined concept of equality on the 
underlying objects to which they apply?  That is, given object X and 
an object Y, it is possible to examine them and determine whether or 
not they are the same object.
Yes; there is an equality relation on the objects.

The collection of objects of a (small) category forms a set. You can 
only form sets of things which satisfy an equality relation, because 
the extensionality axiom, which defines when two sets are equal, is 
defined in terms of equality of members. By foundation, there has to be 
an equality relation on the individuals in any such universe of sets.

And if the underlying objects have such a concept of equality, what 
prevents them from being sets or members of sets?
I don't understand your confusion. The objects can certainly be sets; 
the category of sets and functions (which is not small, BTW) is the 
archetypal example. To determine if two sets are equal you use 
extensionality: sets A and B are equal iff they have equal members.

Nothing prevents forming a set of objects either.

 But categories are presented as being more general than sets.
They're more general in the sense that:

  a) the collection of objects can form a proper class, so a large 
discrete (no non-identity arrows) category is bigger than any set;

  b) the definition of a category involves more data than the 
definition of a set. To define a set, you only need to specify its 
elements. To define a category you need to give a collection of 
objects, an indexed collection of homsets and a composition operator.

  c) the collection of all sets forms one particular category.

Does anyone see the cause of my non-comprehension here?
AFAICT, the source of your confusion is that you erroneously believed 
that sets and/or members of sets couldn't (necessarily) be compared for 
equality, but the fact is they always can, by definition. If the 
category is large, there are additional issues, but there is still an 
equality relation on the objects.

Does that clear things up?

Regards,
Frank


Re: Exhaustive Pattern-Matching

2003-08-28 Thread Frank Atanassow
On Thursday, Aug 28, 2003, at 08:47 Europe/Amsterdam, Steffen Mazanek 
wrote:

Thank you all for your help. I will try this ghc-flag.
It is interesting as well, that in contrast to Haskell Standard ML 
ensures,
that pattern-matches are exhaustive and irredundant.
SML has the same limitations w.r.t. guards as Haskell; Haskell 
compilers can and do check exhaustiveness, but not redundancy because 
matches are tried sequentially. I believe SML matching is also 
sequential. If there is a difference between the two, it must have to 
do with laziness.
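A sketch of why guards defeat the check in either language: the checker 
treats every guard as possibly failing, so only a catch-all clause lets 
it prove coverage.

```haskell
sign :: Int -> Int
sign n
  | n > 0     = 1
  | n < 0     = -1
  | otherwise = 0
-- If 'otherwise' is replaced by the semantically equivalent guard
-- 'n == 0', neither GHC nor an SML compiler can prove the match
-- exhaustive: that would require reasoning about arithmetic.
```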

Regards,
Frank
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Haskell and algebra

2003-08-14 Thread Frank Atanassow
Gustavo Villavicencio wrote:

 Hi all,

 I am trying to understand the algebraic laws and operators
 behind a functional expression...

 f >>= g \equiv g* . f

 in the Kleisli Star context. Is this right?
Yep.

 If it is so, can I combine g*.f with a fork for example?

What do you mean by a fork?

Regards,
Frank


Re: Haskell and algebra

2003-08-14 Thread Frank Atanassow
Gustavo Villavicencio wrote:
Frank Atanassow said:

What do you mean by a fork?

 So, the question is, if i have

 f : A -> T B and g : A -> T C

where T is a monad, i.e. an endofunctor, can i combine f and g as

 <f,g> : A -> T (BxC)

knowing that T involves side effects?
I guess you are asking: if category C has finite products, and T is a 
strong monad on C, does the Kleisli category have finite products? The 
answer, I imagine, is not in general, but yes if the monad is 
commutative.

  pair :: (Monad m) => (a -> m b) -> (a -> m c) -> (a -> m (b,c))
  pair f g a = do b <- f a
                  c <- g a
                  return (b, c)
You need to pick an order for the first two actions.

I haven't done the proof, though.
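The order-dependence is easy to observe with a non-commutative monad. A small sketch of my own (the names `pair` and `pair'` are just for illustration), using the list monad, where the two sequencings enumerate results in different orders:

```haskell
-- pair runs f's action first; pair' runs g's action first.
pair, pair' :: Monad m => (a -> m b) -> (a -> m c) -> a -> m (b, c)
pair  f g a = do { b <- f a; c <- g a; return (b, c) }
pair' f g a = do { c <- g a; b <- f a; return (b, c) }

-- In the (non-commutative) list monad the two orderings give
-- differently ordered result lists, so the Kleisli product is only
-- canonical when the monad is commutative.
main :: IO ()
main = do
  print (pair  (\_ -> [1, 2]) (\_ -> "xy") ())
  print (pair' (\_ -> [1, 2]) (\_ -> "xy") ())
```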

Regards,
Frank


Re: Haskell and algebra

2003-08-14 Thread Frank Atanassow
Frank Atanassow wrote:

Gustavo Villavicencio wrote:

  Hi all,
 
  I am trying to understand the algebraic laws and operators
  behind a functional expression...
 
   f >>= g \equiv g* . f
 
  in the Kleisli Star context. Is this right?
Yep.
Oops, or rather, not quite.

  m >>= g

means

  g* m

The Kleisli composition (-)* . (-) is sometimes written as (@@):

  (@@) :: (Monad m) => (b - m c) -> (a -> m b) -> (a -> m c)
  (f @@ g) x = let m = f x in m >>= g
Regards,
Frank


Re: Haskell and algebra

2003-08-14 Thread Frank Atanassow
The Kleisli composition (-)* . (-) is sometimes written as (@@):

  (@@) :: (Monad m) => (b -> m c) -> (a -> m b) -> (a -> m c)
  (f @@ g) x = let m = f x in m >>= g
Man, I can't get anything right today. I meant:

  (g @@ f) x = let m = f x in m >>= g
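For the record, the corrected composition can be exercised directly; this is my own check, not from the thread, and it also compares against `(<=<)`, the name modern `Control.Monad` gives this operator:

```haskell
import Control.Monad ((<=<))

-- Kleisli composition, argument order as in the corrected definition.
(@@) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
(g @@ f) x = let m = f x in m >>= g

safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

main :: IO ()
main = do
  print ((safeSqrt @@ safeRecip) 4)   -- Just 0.5
  print ((safeSqrt @@ safeRecip) 0)   -- Nothing (recip fails first)
  print ((safeSqrt <=< safeRecip) 4)  -- same result with the library operator
```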

Apologies for the flooding.

Regards,
Frank


Re: Interesting Read (fwd)

2003-02-20 Thread Frank Atanassow
Andrew J Bromage wrote (on 20-02-03 10:26 +1100):
 All that is required of a theorem is that it is correct.
 
 A tool, on the other hand, not only has to work (i.e. it has to
 correctly accomplish some task), it also has to be safe to use, its
 controls must be meaningful to the intended user, it should be in
 some way better than the tool which it replaces and so on.

In practice, one requires more of a theorem than that it simply be
correct. One also wants simple, readable and, when possible, multiple
inequivalent proofs of the theorem, organized into meaningful lemmas which
might be reused to prove other theorems. Conversely, one wants the proof to
use the theorems and lemmas of others.

Much the same, you will note, holds of programs and program specifications.

In addition, some people consider constructive proofs superior to classical
proofs, not only for practical (e.g., deriving an algorithm) reasons, but also
for pedagogical and philosophical reasons.

And these days, the practice of writing executable specifications in, oh, say
Haskell, is becoming increasingly popular.

Regards,
-- 
Frank



Re: Prototyping language extention

2003-02-10 Thread Frank Atanassow
Dmitry Malenko wrote (on 09-02-03 21:57 +0200):
 As part of my research I'm going to create prototype system implementing
 some extentions concerning at most module system to the language and to
 toy with it a little.  So I wonder what is the best way to do that?
 
 Should I cope with compiler sources, or write an interpeter from
 scratch, or maybe something else?

See:

  Iavor S. Diatchki, Mark P. Jones and Thomas Hallgren. A formal specification
  of the Haskell 98 Module System. PLI Haskell Workshop, 2002.
  http://www.cse.unsw.edu.au/~chak/hw2002/

This describes an executable specification (in Haskell) which complements the
specification of the typing system described in Typing Haskell in
Haskell. It's perfect for prototyping and tinkering with the module system.

Regards,
-- 
Frank



Re: Interpret haskell within haskell.

2002-12-20 Thread Frank Atanassow
Lauri Alanko wrote (on 20-12-02 11:26 +0200):
 For what it's worth, I will probably be doing my MSc thesis on
 adapting eval (and reflection in general) to a statically typed
 language. Essentially you need a run-time representation of the
 environment and the typing context, and a type system which groks the
 relationship between run-time and static types.
 
 Strangely enough, I haven't found any real research on this particular
 subject. There's lots of work on related areas, eg. dynamic types and
 intensional polymorphism, but nothing that's really motivated by eval
 and reflection.
 
 Any suggestions for references are welcome. :)

There is quite a bit of work on staged evalution, metaprogramming,
quasiquotation, reflection and run-time code generation in typed and ML-like
languages. It's a very active and, IMO, promising area of research.

Here is just a sample. Just follow the bibliography trail, or use CiteSeer:

  Rowan Davies and Frank Pfenning. A modal analysis of staged computation. In
  POPL '96, pp. 258-270, 1996.

  Jean Goubault-Larrecq. Logical Foundations of Eval/Quote Mechanisms, and the
  Modal Logic S4.
  http://citeseer.nj.nec.com/goubault-larrecq97logical.html

  MetaML homepage
  http://www.cse.ogi.edu/PacSoft/projects/metaml/

  Walid Taha. Multi-Stage Programming: Its Theory and Applications. PhD
  thesis, OGI, 1999.

  MetaOcaml
  http://www.cs.rice.edu/~taha/MetaOCaml/
  (also see Related Systems at the bottom of this page)

  Tim Sheard and Simon Peyton Jones. Template Meta-programming for Haskell.
  Haskell Workshop, Pittsburgh, October 2002.

  Mark Shields, Tim Sheard and Simon Peyton Jones. Dynamic Typing as Staged
  Type Inference. POPL '98.

  Eugenio Moggi et al. An Idealized MetaML: Simple, and More Expressive.
  ESOP '99.

  Pierre Leleu. A Modal Lambda Calculus with Iteration and Case Constructs.
  http://citeseer.nj.nec.com/leleu97modal.html

  Philip Wickline, Peter Lee and Frank Pfenning. Run-time Code Generation and
  Modal-ML. PLDI '98.

Hope this helps!

Regards,
-- 
Frank



Re: Design patterns in Haskell

2002-12-03 Thread Frank Atanassow
Andrew J Bromage wrote (on 03-12-02 09:52 +1100):
 On Mon, Dec 02, 2002 at 08:26:06AM +0100, Johannes Waldmann wrote:
 
  well I love design patterns, it's just that in Haskell-land 
  they are called higher-order functions, or polymorphic functions, etc.
 
 Can I safely translate that as We use design patterns but we don't
 like the name?
 
 In a different language, you use different design patterns.  Just as
 iterator-based patterns are next to useless in Haskell (where we have
 lazy streams), patterns based on CPS are next to useless in a
 language with state and non-local return constructions.

I think there is a world of difference between GoF-style design patterns and
things like HOFs, polymorphism and whatnot in Haskell.

If you are programming GUI applications in C and use callbacks for the widget
actions, then you are basically using the design pattern which gets formalized
as HOFs in an ML-like language. But C will not guarantee for you that your
callback function is well-typed, hold onto the data in the closure, do the
necessary garbage collection, etc. Those things may seem like details, but
they are what make HOFs robust and practical to use ubiquitously; the callback
design pattern, since it basically puts the burden on the programmer, can
never hope to achieve that degree of robustness.
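The callback-as-HOF pattern can be sketched in a few lines. This is my own illustration with a hypothetical `Button` type, not an API from any real GUI library:

```haskell
-- A hypothetical widget whose action is just a function value: the type
-- system guarantees the "callback" has the right shape, and the closure
-- keeps its captured data (here, the label) alive automatically.
data Button = Button { label :: String, onClick :: Int -> IO () }

-- The message is kept pure so it can be tested separately.
clickMessage :: String -> Int -> String
clickMessage lbl n = lbl ++ " clicked " ++ show n ++ " times"

mkButton :: String -> Button
mkButton lbl = Button lbl (putStrLn . clickMessage lbl)

main :: IO ()
main = onClick (mkButton "OK") 3
```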

Furthermore, design patterns come with a set of informal hints about when and
where to apply the pattern. The notion of HOF is, of course, completely
neutral about this, except insofar as the type system forces you to provide a
HOF where a function is expected. OTOH, the advice that you should use HOFs
to model widget actions or you should use HOFs to allow customization of
applications via hooks (as in Emacs LISP) could reasonably be construed as a
design pattern. So maybe a design pattern can be regarded as a correspondence
between formal concepts and domain-specific ones.

-- 
Frank



Re: Mathematics and Software Engineering (was slide: useful function? )

2002-12-03 Thread Frank Atanassow
[EMAIL PROTECTED] wrote (on 03-12-02 18:18 +0100):
 Perhaps I should rephrase:
  There's too much mathematics in it, I'm not a mathematician
..and later...
  lot of mathematics.  One of the things I like about FP is that it
  demands more mathematical sophistication of its practitioners, which,
  contrary to some opinions, I think is a Good Thing.
 
 I'm fully with you there, but - as someone already pointed out - I don't
 want to need a math major to read a paper regarding programming.

How many times has this phrase been uttered, I wonder...!

The truth is:

   You DON'T need a math degree to understand such papers.

This may come as a shock, but the reason for this is that they are not written
by or for an audience of mathematicians, but rather for an audience of CS
researchers who actually possess---wait for it---CS degrees! :)

Now, admittedly, there is a lot funny notation, jargon and special concepts in
PLT papers. That's why a few people have undertaken the task of writing
textbooks, surveys and tutorials for people who did not come into this world
with the Library of Congress preloaded in their brains. Two good books which
come to mind are:

  John C. Mitchell. Foundations for Programming Languages. MIT Press, 1996.

  Benjamin C. Pierce. Types and Programming Languages. MIT Press, 2002.
  
I've also collected some freely available ones here:

  PLT Online
  http://www.cs.uu.nl/~franka/ref

That was CS and PLT. Now let's talk about mathematics and algebra.

It doesn't take a degree in mathematics to do or understand mathematics. The
people who get degrees in mathematics get them, not in order to do
mathematics, but rather in order to create/discover (do you favor Plato or
Brouwer?) _new_ mathematics.

People nowadays use the word mathematics as if it were some exotic beast you
can only find in the hinterlands of darkest Africa!  Every field, even
something as mundane as accounting, has its wildly esoteric corners. No one
expects programmers to be experts in twistor theory or inaccessible
cardinals. Hell, no one expects CS researchers to be experts in those bits
either.

Elementary algebra is not one of those esoteric bits. As mathematical topics
go, it's simple, fundamental, ubiquitous and closely connected to FP. It won't
gobble up your grandmother or demand your first-born child.

At a guess, I think most FP programmers could easily get up to speed on the
basics of constructive universal algebra in the context of FP---signatures,
initial algebras, catamorphisms, free constructions, monads, etc.---in the
space of a workshop-length course.

It's common practice for programmers in industry to take training courses on
things like Java beans, ActiveX controls and UML; why not algebra, or proof
theory or operational semantics? At least those things are sure to survive
longer than the latest buzzword technology...

-- 
Frank



Re: AW: slide: useful function?

2002-12-02 Thread Frank Atanassow
John Hughes wrote (on 02-12-02 10:27 +0100):
  On Mon, 2 Dec 2002, Andrew J Bromage wrote:
 
   ... If you mention a term like design patterns,
 
  well I love design patterns, it's just that in Haskell-land
  they are called higher-order functions, or polymorphic functions, etc.
 
  -- Johannes Waldmann  http://www.informatik.uni-leipzig.de/~joe/ --
 
 Or maybe more general notions, such as
 
define a recursive datatype together with a catamorphism combinator
 
 or even
 
embed a domain-specific language via combinators over an ADT
 
 There are patterns of that sort in our programs, which we would probably
 rather call design techniques, which aren't so easily captured by a
 higher-order function definition.

And yet these are two examples which might reasonably be adopted, like HOFs,
as language features:

  1) most recursive datatypes, including nested datatypes, determine a unique
 catamorphism, so we could generate it for the user automatically, or
 provide a fold-expression analogous to the case-expression, as in
 Charity;

  2) similarly, many polymorphic datatypes, together with a specification of a
 notion of variable or environment, determine a substitution monad for an
 embedded DSL, so we could generate the map, unit, join, bind and
 substitution functions automatically from the spec, and I imagine one
 could provide a sort of quasiquotation syntax for specifying terms of the
 DSL (which is, though, not so far from do-syntax as we have it now).

There is a strong trend in declarative languages, I think, where design
patterns evolve into more bullet-proof language features, precisely because
design techniques in declarative languages are susceptible to formal
analysis.

For example, I think this is how algebraic datatype declarations (and perhaps
arguably modules) evolved in ML. In a short discussion at the end of
Tatsuya Hagino's thesis (A Categorical Programming Language), he describes
how abstract datatypes were adopted by SML as a primitive part of the
declaration syntax. In the original ML developed for LCF, datatypes were
declared like this:

  absrectype btree = int + (btree # btree)
with leaf n = absbtree(inl n)
and node(t1,t2) = absbtree(inr(t1,t2))
and isleaf t = isl(repbtree t)
and leafvalue t = outl(repbtree t)
and left t = fst(outr(repbtree t))
and right t = snd(outr(repbtree t));;

The modern equivalent is:

  data Btree = Leaf Int | Node (Btree, Btree)

Imagine (or, recall, if you are old enough :) having to declare all those
introduction and elimination primitives for every datatype! Maybe in a few
years we will be looking back and saying the same about catamorphisms and
DSLs...
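To make point 1) above concrete, here is a sketch of the catamorphism a compiler could generate mechanically from the Btree declaration, together with two folds defined in terms of it (my own worked example, not from the original message):

```haskell
data Btree = Leaf Int | Node (Btree, Btree)

-- One argument per constructor; this is exactly the unique catamorphism
-- determined by the datatype declaration.
cataBtree :: (Int -> r) -> ((r, r) -> r) -> Btree -> r
cataBtree leaf node (Leaf n)      = leaf n
cataBtree leaf node (Node (l, r)) =
  node (cataBtree leaf node l, cataBtree leaf node r)

-- Folds over Btree then become one-liners:
size, total :: Btree -> Int
size  = cataBtree (const 1) (\(l, r) -> l + r)  -- number of leaves
total = cataBtree id        (\(l, r) -> l + r)  -- sum of leaf values

main :: IO ()
main = print (size t, total t)
  where t = Node (Leaf 1, Node (Leaf 2, Leaf 3))
```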

-- 
Frank



Re: cartesian classes

2002-11-26 Thread Frank Atanassow
David Bergman wrote (on 26-11-02 01:29 -0500):
 I would like to know if anyone (maybe Mark P) knows the status of
 Cartesian classes in different Haskell implementations. I.e., does
 anyone implement the suggested functional dependencies or the less
 general parameterized type classes?
 
 I have need for the multi-variable classes quite often (especially in a
 genetic algorithm framework I am building in Haskell). Although I would
 hesitate to extend beyond Haskell 98, if there exist a common view on
 how to best implement Cartesian classes (read it will be part of
 Haskell 2) and/or a stable implementation for it, I am willing to be a
 bit adventurous...

What do you mean by cartesian classes? Do you mean multi-parameter type
classes?

They're supported by GHC and Hugs, but not NHC98 or HBC. The same goes for
functional dependencies.

-- 
Frank



Re: Rational sequence

2002-10-22 Thread Frank Atanassow
Jerzy Karczmarczuk wrote (on 22-10-02 13:05 +0200):
 What do you think, what
 is the Rational form of 2.3 ? (GHCi says 23/10).
 
 The answer is:
 
 2589569785738035 % 1125899906842624

Er, why?

Because 2.3 is not representable using a double precision float or something?
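Indeed, one can check this directly: 2.3 has no finite binary expansion, so the nearest Double is an exact dyadic rational whose denominator is 2^50, which is precisely the fraction quoted above.

```haskell
-- toRational on a Double recovers the float's exact value, not the
-- decimal the programmer wrote.
main :: IO ()
main = do
  print (toRational (2.3 :: Double))  -- 2589569785738035 % 1125899906842624
  print (2 ^ 50 :: Integer)           -- 1125899906842624, the denominator
```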

-- 
Frank



Re: Rational sequence

2002-10-22 Thread Frank Atanassow
Frank Atanassow wrote (on 22-10-02 15:08 +0200):
 Jerzy Karczmarczuk wrote (on 22-10-02 13:05 +0200):
  What do you think, what
  is the Rational form of 2.3 ? (GHCi says 23/10).
  
  The answer is:
  
  2589569785738035 % 1125899906842624
 
 Er, why?
 
 Because 2.3 is not representable using a double precision float or something?

Oh, sorry. I understand Jerzy to be saying that that big long fraction was the
result that he _wanted_, but instead the opposite seems to be true.

That explains things. :)

-- 
Frank



Show the components of the sum

2002-09-21 Thread Frank Atanassow

David Roundy wrote (on 21-09-02 07:30 -0400):
 Here is what I'd like to do:
 
 \begin{code}
 data D = A String | B String
 instance  Show D where
 show = showD
 instance  Read D where
 readsPrec _ = readsD
 readD s = (readsA s) ++ (readsB s)
 \end{code}
 
 A is formatted like ``A string''.
 \begin{code}
 showD (A s) = "A " ++ s
 readsA s = [(A thestr, r) | ("A", x) <- mylex s,
 (thestr, r) <- mylex x]
 \end{code}
.. [ and similarly for B ] ...
 
 The problem I have is that ghc complains because I've split up the
 definition of showD with readsA defined in between.  This surprised me,
 since it seemed that the code should still be unambiguous.
 
 Is there some nice way around this,

There's no way to intersperse clauses of top-level declarations, no.

 or do I have to define a separate
 function for each constructor, and then have a list in one place (as I do
 with the readsD, in that case because pattern matching won't give me what I
 want), like:
 
 showD (A s) = showA (A s)
 showD (B s) = showB (B s)

Almost. But then showA and showB are partial. It's better to do this:

  showA s = "A " ++ s

and similarly for B, and then:

  data D = A String | B String
  instance Show D where
    show (A s) = showA s
    show (B s) = showB s
  instance Read D where
    readsPrec _ = readsD
  readsD s = (readsA s) ++ (readsB s)

What you are doing here basically is just assigning names to the branches of a
case-expression:

  h sum =
    case sum of
      Left x  -> ... x ...
      Right y -> ... y ...

=

  h sum =
    case sum of
      Left x  -> (\x' -> ... x' ...) x
      Right y -> (\y' -> ... y' ...) y

=

  h sum =
    let f x' = ... x' ...
        g y' = ... y' ...
    in  case sum of
          Left x  -> f x
          Right y -> g y

=

  f x' = ... x' ...

  g y' = ... y' ...

  h (Left x)  = f x
  h (Right y) = g y
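A minimal runnable version of the suggestion above (the Read side is omitted, since it depends on the original poster's `mylex`):

```haskell
data D = A String | B String

-- Named branch bodies, total on their own arguments.
showA, showB :: String -> String
showA s = "A " ++ s
showB s = "B " ++ s

-- The instance just dispatches on the constructor.
instance Show D where
  show (A s) = showA s
  show (B s) = showB s

main :: IO ()
main = putStrLn (show (A "foo") ++ " / " ++ show (B "bar"))
```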

-- 
Frank



Re: Writing a counter function

2002-07-01 Thread Frank Atanassow

Shlomi Fish wrote (on 29-06-02 17:30 +0300):
 
 I'm trying to write a counter function that would return a tuple whose
 first element is the current value and whose second element is a new
 counter.

John Hughes showed how to do this. Here is a closely related, more abstract
solution which employs existential types. First let me give a paradigmatic
example which is slightly different from what you want: streams.

  data Stream a = forall x. Stm x (x -> a) (x -> x)

Note that we hide the state type x. This lets us keep the implementation
hidden from the client.

  value (Stm state val next) = val state
  next (Stm state val next) = Stm (next state) val next
  both s = (value s, next s)
  
  unfold :: x -> (x -> a) -> (x -> x) -> Stream a
  unfold state val next = Stm state val next
  
  -- the naturals
  nats1 = unfold 0 id succ
  -- value nats1 = 0
  -- value (next nats1) = 1
  -- value (next (next nats1)) = 2 ...

In the example above, we use an integer for the state, project it out when we
need a value, and increment it to get the next state. Here's another way to do
it, using a state which is not an integer.

  nats2 = unfold [0..] head tail

Here we just used an infinite list for the state. head :: [Int] -> Int, so
the state type is now different from method result type.

And here's an example where we use a step of 100 when we create the stream.

  -- step 100
  stm1 = unfold 5 id (+ 100)

But you wanted an object where we can choose the step at each point in
time. OK:

  data MyStream a = forall x. MyStm x (x -> a) (a -> x -> x)
  
  myValue (MyStm state val next) = val state
  myNext arg (MyStm state val next) = MyStm (next arg state) val next
  myBoth arg s = (myValue s, myNext arg s)
  
  myUnfold :: x -> (x -> a) -> (a -> x -> x) -> MyStream a
  myUnfold state val next = MyStm state val next
  
  counter n = myUnfold n id (+)

Now the state-transforming function accepts an extra argument along with the
state. And in fact we were able to generalize the idea of stepping, since we
never had to mention integers or addition till the last line.

Easy, right? You can see the pattern for defining similar sorts of datatypes
now. Hide the state type x with a forall, and, for each way of observing and/or
transforming the state, include a function of type x -> ...

OO people call this a (functional) object. Mathematicians call it a
coalgebra. There is a notion of coalgebraic (or coinductive) datatype which is
dual to the notion of algebraic datatypes in Haskell; sometimes they're called
codatatypes. The analog of unfold is fold, of methods (sometimes called
destructors or observers) are data constructors, and of the state type is the
accumulator type which is the result of a fold. Some languages like Charity
support codatatypes directly, but in Haskell we can get the same effect with a
local forall.
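The counter from the MyStream sketch above can be checked directly; this is a self-contained rendering (modern GHC requires the ExistentialQuantification pragma for the local forall):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

data MyStream a = forall x. MyStm x (x -> a) (a -> x -> x)

myValue :: MyStream a -> a
myValue (MyStm state val _) = val state

myNext :: a -> MyStream a -> MyStream a
myNext arg (MyStm state val next) = MyStm (next arg state) val next

-- A counter whose step is chosen at each point in time.
counter :: Integer -> MyStream Integer
counter n = MyStm n id (+)

main :: IO ()
main = do
  let c = counter 0
  print (myValue c)                        -- 0
  print (myValue (myNext 5 c))             -- stepped by 5
  print (myValue (myNext 2 (myNext 5 c)))  -- then by 2, giving 7
```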

Actually, you can make this even more OO-like using type classes, but things
seem to get messy if we keep the result type polymorphic so I'll fix it to
Integer:

  class CounterState x where
    sValue :: x -> Integer
    sNext :: Integer -> x -> x

  data Counter = forall x. CounterState x => Counter x

  instance CounterState Integer where
    sValue = id
    sNext step state = step + state

  mkCounter :: Integer -> Counter
  mkCounter n = Counter n

A disadvantage of this approach is that now you can only have one
implementation for each state type; with unfold, where we stored the functions
in the data constructor fields, we could give many implementations of the
methods for each state type. In OO terms, the type class approach is
class-based whereas the unfold approach is object-based.

The advantage, of course, is that you can use inheritance via the type class
inheritance now.

BTW, I hope that this note will not encourage the OO readers on this list to
objectify _everything_ now, because that leads (IMO) to twisted programs
which often emphasize the wrong sort of flexibility.

Both datatypes and codatatypes have their place:

  * A datatype is called for when you need a collection of finite-sized values
(lists, trees, etc.), and want to be able to traverse them easily. The
fold for a datatype does this for you, and is guaranteed to terminate.

  * A codatatype is called for when you have a collection of infinite-sized or
circular values (streams, automata, etc.) and you want to be able to index
arbitrarily into (grab subparts of) them, without possibility of error or
exposing the representation.

Note that you cannot generally traverse a value of a codatatype: if you try
to fold a stream, the computation will diverge.

On the other hand, you cannot index arbitrarily deeply into a value of a
datatype. Remember our stream example? We could have called the value
function head and the next function tail. You can always apply these to
a stream, and they will never fail. But if you try that with lists, you will
raise an error once you get to the end of it.

-- 
Frank Atanassow, Information & Computing

Re: layout rule infelicity

2002-05-30 Thread Frank Atanassow

[redirected to haskell-cafe]

Ashley Yakeley wrote (on 30-05-02 03:18 -0700):
 At 2002-05-30 02:54, Lennart Augustsson wrote:
 
If you look at C (& offspring), it's not the {;} that makes the code
 readable, it's the indentation that does.  So why not acknowledge that?
 
 In C, the indentation is an important visual clue, but there are many 
 different indentation styles.

I think there are not so many different _indentation_ styles. The differences
seem mostly to revolve around where to put _the braces_. And one might argue
that the reason for that is precisely that it's so arbitrary, since the
indentation is the main visual clue to structure anyway.

Personally, I always try to factor out arbitrariness from my programs. I think
layout does the same thing for syntax.

 If you're used to braces, complicated Haskell expressions with layout 
 look confusing, since it's not immediately clear which indentation style 
 the layout rules are trying to enforce.

If you're used to C, then layout and indentation will be the least of your
difficulties when you start using Haskell...

Anyway, I have the feeling that, for every person on this list who complains
about layout being unintuitive, there are 10 people who would say the
opposite. Shall we take a poll?

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-379



Re: Hugs plugin, Haskell Browser

2002-03-14 Thread Frank Atanassow

Robert Giegerich [EMAIL PROTECTED] wrote,
 I often use Haskell demos in teaching algorithms. The problem is that this
 does not integrate well with the rest of the material, e.g. lecture
 notes or slides in PDF or HTML. I'd like to integrate explanations and
 demos and explorative changes to algorithms. This needs better support.
 
 Is there something like a Hugs plugin for Netscape?

Is DVI OK?

The Active-DVI package

  http://pauillac.inria.fr/activedvi/

allows embedding programs into DVI files, along with a host of other features
for presentations (effects, links, transitions, ...). If you look at the demo
included with the distribution you will see a slide with a copy of the Ocaml
interpreter running inside a DVI page.

Files using ActiveDVI specials are still viewable with, for example, xdvi, but
of course most of the features are absent or mangled.

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-379



Re: College Student

2002-03-11 Thread Frank Atanassow

My apologies to the list. This message is addressed to my old email address; I
don't know why I was singled out for private tutoring lessons.

Strange. Does this sort of thing happen to other people on this list too? I get
these sorts of messages, usually privately, about once every two months.

mary lee wrote (on 08-03-02 21:11 -0800):
 
 To : Mr. Frank A.
 Chrishtoph[EMAIL PROTECTED]
 
 I am recently doing my assignment on Haskell at my
 college.  I am just new on Haskell programming. I
 would like to request if you can teach me for just few
 programming rules on Haskell.  It is my most
 appreciation to learn from you.
 
 My assignment title is INTERNATIONAL AIRLINES SYSTEM.
 
 May i know how to open file, put record entry and just
 read it from its database.  How to do a simple input
 and output for acquire user options using functional
 programming.  It is very important for me to learn
 Haskell in a proper way.  It will lead me to write a
 good documentation.  A good understanding on
 programming is good.  I am good in Visual Basic,
 Pascal, Cobol, C++ and also JAVA.  But facing
 functional programming is a real problem for me.  
 
 Now i have doing a simple Haskell programming on the
 way.  I just know how to import file, declare types
 and display output.
 
 Your help and guidelines is most appreciated.
 
 From,
 Lee Ling Ling

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-3791




Virus

2001-11-22 Thread Frank Atanassow

In case you haven't figured it out yet, the recent rash of posts by some of
our list members is symptomatic of a virus; Arjan van Ijzendoorn says it
is the Aliz virus. There is information about it here:

  http://www3.ca.com/virus/virus.asp?ID=10405

So if you're using Outlook, don't open anything suspicious.

BTW, the link above says that

  The worm does not install itself and contains no other payload.

so I guess it is a sort of chain-letter worm, and relatively benign.

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-379




Re: functor

2001-10-30 Thread Frank Atanassow

Wolfgang Jeltsch wrote (on 29-10-01 23:43 +0100):
 you cannot use sections with types and (->). Furthermore the variable must
 begin with a lowercase letter. So you have to write
 instance Functor ((->) a) where.

Erp, I said that the Functor class has arity *. Actually, it has arity 1
(i.e., it is a unary relation on types), with the argument being of kind
* -> *. This is one of the infelicities of the class system.

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-379




Re: functor

2001-10-29 Thread Frank Atanassow

Wolfgang Jeltsch wrote (on 29-10-01 23:43 +0100):
 you cannot use sections with types and (->). Furthermore the variable must
 begin with a lowercase letter. So you have to write
 instance Functor ((->) a) where.

Actually, you have to write:

  instance Functor ((->) a)

since class Functor has arity *. This works because

  (->) a = (a ->)

The outer parentheses are only there to ensure that the expression gets parsed
as one argument, and not two.
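This is the reader functor, whose fmap is just function composition. A small check of my own (recent GHC Preludes already ship the `Functor ((->) a)` instance, so the sketch only uses it rather than redefining it):

```haskell
-- fmap for ((->) a) is (.): fmap h g = h . g
main :: IO ()
main = do
  let f = fmap (+ 1) (* 2) :: Int -> Int  -- (+1) . (*2)
  print (f 10)                            -- 21
  print ((fmap show negate :: Int -> String) 3)
```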

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-379




Re: proofs

2001-10-18 Thread Frank Atanassow

Richard wrote (on 17-10-01 10:20 -0700):
 I could teach myself to do it clumsily, but I want to learn from others.
 would learning category theory help me do this?  pointers to documents?
 proof-assistant software?

You might look at my page of online programming language theory texts,
particularly the book by Hennessy, Pitts' course material on it, Nielson &
Nielson's semantics book, Mike Gordon's notes on specification and
verification, and Pfenning's course notes on theorem proving and deduction.

  http://www.cs.uu.nl/~franka/ref.html

-- 
Frank Atanassow, Information & Computing Sciences, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-3261 Fax +31 (030) 251-379




Re: Newbie question about classes and class parameters

2001-10-17 Thread Frank Atanassow

Dmitry Astapov wrote (on 17-10-01 11:25 +0300):
 Let's say we have infamous Set class defined this way:
 
  class Set s a where
empty   :: s a
    isEmpty :: s a -> Bool
[..]
  instance Set IntegerSuperSet where
empty = ISS (SL [])
isEmpty (ISS (SL [])) = True
isEmpty _ = False
 
 gives me:
 Reading file /tmp/bug.lhs:
 ERROR /tmp/bug.lhs:38 - Wrong number of arguments for class Set
 
 Obviously I am missing something very important here. Could someone enlighten me?

First, the class declaration defines Set as having two parameters, s and a. In
an instance declaration, you must supply types for both. So:

instance Set IntegerSuperSet Integer where
  ...

would be correct, except for the second problem, which is that in Set s a, s
is actually a type constructor (of kind * -> *), while the argument which you
try to supply for s, namely IntegerSuperSet, is a type constant (of kind
*). So there is a kind mismatch. Try this instead:

  data SuperSet a = SuperSet (SetAsList a)

  instance Set SuperSet a where
...
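Putting it together, a hypothetical complete version (the datatype definitions are reconstructed from the fragments in the question, so treat the names as illustrative) might read:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

-- Reconstructed from the question; names are illustrative.
newtype SetAsList a = SL [a]
newtype SuperSet a  = SuperSet (SetAsList a)

class Set s a where
  empty   :: s a
  isEmpty :: s a -> Bool

-- s = SuperSet has kind * -> *, so the kinds now match.
instance Set SuperSet a where
  empty = SuperSet (SL [])
  isEmpty (SuperSet (SL [])) = True
  isEmpty _                  = False
```

Note that a multi-parameter class like this needs the MultiParamTypeClasses extension (and FlexibleInstances for the bare variable a in the instance head).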





RE: Joy and Concatenative Programming

2001-09-25 Thread Frank Atanassow

[redirected to haskell-cafe]

 I just found out about a functional programming language called Joy (see
 http://www.latrobe.edu.au/philosophy/phimvt/joy.html).

 Joy differs from Haskell in that it has no variables.  Instead, all
 functions are postfix, taking a stack as their argument and returning a
 stack as a result.

Also, IIRC, Joy is untyped.

Programming in Joy is like programming with categorical combinators, but
ignoring the types. Indeed, a category with one object is just a monoid, and
this is supposedly the theory Joy is founded on.

But if you really think variables are a nuisance to programming, you can do
it in Haskell too, by using list combinators and defining a reverse
composition operator. Or just go program in CAML bytecode. It's pretty much
the same thing.

 Joy advocates contend that the elimination of variables and environments
 improves on functional languages in much the same way the elimination of
 state improved on imperative languages.

I think the situations are qualitatively different.

Saying that Joy programs lack variables is pretty close to saying that its
free variable environments are linearly ordered. Yet a Joy programmer can
reorder the arguments on the stack if he rewrites his program to take
account of that fact. So, though the ordering is really irrelevant, every
Joy program must pick one, and that can be understood as non-declarative,
since it has nothing to do with the result of the program. FP languages
factor out that arbitrary choice by treating free variable environments as
(multi-)sets. (Of course, you lose that advantage when you package up your
environment in a closure, because Haskell distinguishes between a -> b -> c
and b -> a -> c, even though they are isomorphic.)
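To sketch what that variable-free style looks like in Haskell (the |> operator below is my own ad-hoc reverse composition; Control.Arrow's >>> has the same flavour):

```haskell
-- Reverse composition: read pipelines left to right, Joy-style.
infixl 1 |>
(|>) :: (a -> b) -> (b -> c) -> a -> c
f |> g = g . f

-- A point-free "stack program": double everything, keep the evens,
-- sum what is left. No variable is ever named.
pipeline :: [Int] -> Int
pipeline = map (* 2) |> filter even |> sum
```

For example, `pipeline [1,2,3]` doubles to [2,4,6], keeps all three (all even), and sums them to 12.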






Re: Application letters at the Haskell workshop: suggestion

2001-09-16 Thread Frank Atanassow

Marcin 'Qrczak' Kowalczyk wrote (on 16-09-01 09:30 +):
 Getting right descriptions of what was expected or unexpected is not
 trivial. For example when there is no separate lexer, we rarely have
 anything besides raw characters as unexpected. We have something
 more descriptive only if the grammar explicitly calls 'unexpected'
 after successfully parsing something. We really don't want to give
 a message like "expecting 'e', found 'i'" when the real cause is
 "expecting 'then', found 'thickness'".

A bit off-topic, but after some experience using combinator parsers in Haskell
(not just Parsec) where the lexical and syntactical bits were done in the same
grammar, I concluded that the traditional separation between the two, a la Lex
and Yacc, does indeed have its merits for just this reason: by stratifying the
grammar you introduce an abstraction boundary which, I think, agrees better
with the way programmers, at least, have learned to reason about syntax. (And
that boundary is an ideal place to introduce cuts to prevent backtracking.)
IMO, the main advantage to combining the two stages is that you can use the
same formalism and, in the case of Haskell-style parsers, that you can
modularize the grammar into libraries; but viewing lexemes as non-terminals is
mostly a disadvantage.

More generally, one might imagine stratifying a large grammar even further, by
feeding the parser output to another parser. Traditionally we do this to
handle context-sensitive conditions because of limitations in Yacc-style
parser technology; for example, static analyzers and type checkers are usually
context-sensitive. But if your second-stage parser emits abstract syntax
trees, maybe you could have a third-stage parser which emits declaration
blocks or modules.
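As a toy illustration of the stratification (the token names and error format are invented for the example; a real Parsec-based grammar would look quite different):

```haskell
-- Stage 1: the lexer. Its output is the abstraction boundary.
data Token = TIf | TThen | TIdent String deriving (Eq, Show)

lexer :: String -> [Token]
lexer = map classify . words
  where
    classify "if"   = TIf
    classify "then" = TThen
    classify w      = TIdent w

-- Stage 2: a parser over tokens rather than characters, so errors
-- are reported in terms of lexemes, never single characters.
expect :: Token -> [Token] -> Either String [Token]
expect t (t' : ts)
  | t == t'   = Right ts
  | otherwise = Left ("expecting " ++ show t ++ ", found " ++ show t')
expect t [] = Left ("expecting " ++ show t ++ ", found end of input")
```

Here `expect TThen (lexer "thickness x")` fails with a message naming the whole lexeme "thickness", which is exactly the behaviour a character-level grammar struggles to produce.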





Re: The future of Haskell discussion

2001-09-13 Thread Frank Atanassow

Just a quick remark:

S. Alexander Jacobson wrote (on 13-09-01 12:40 -0400):
 As a general matter, the addendum process strikes me as confusing and
 dangerous.  I don't want to have a conversation like: "I am using
 Haskell'98 with Addendum A, C, and E."  I'd rather say, "I am using Haskell
 2001" and know that it is useful for developing web apps.

Eventually, you will be saying that, but about Haskell-2.

In the interim, having extensions described in addendums is probably better
than the situation we have now, in which you are forced to say "I am
using Haskell with extension X" (but with whose semantics?) or "I am using GHC
Haskell" (but Hugs also supports my extension). At least if an extension is
described in a Haskell Report Addendum one knows where to look for its
semantics, that its semantics are standardized, and that it is reasonably
accepted by the community and not some Bizarro (I love that word :) extension
which will only ever be implemented in some obscure researcher's pet compiler
project.





RE: newbie conceptual question [from haskell list]

2001-07-26 Thread Frank Atanassow

matt heilige wrote:
 this brings up another issue that has, up to this point, not
 been mentioned... the well-understood (and theoretically guaranteed)
 properties of functional languages allow compilers/interpreters to do some
 much smarter things with functional constructs... this allows very
 abstract language features to be implemented much more efficiently. to
 return to the original question: why not just write in a functional style
 in an imperative language? the answer goes beyond the (not
 inconsiderable) convenience of having a syntax designed around the style.

Good point, and this reminds me of another point I wanted to mention.

You can replicate functional features in conventional languages, but you
lose another benefit of typed functional programming: machined-checked
documentation of your code.

When I write a program, I typically know much, much more about that program
than what is expressed in the source code I write. For example, in C I may
know that two procedures do not interfere, or that they are re-entrant, or
that a variable is never aliased. Sometimes knowing these facts is crucial
to maintaining the program. Then you have to make sure that it gets
communicated to the maintainer by stating it somewhere in the documentation.
And you have to make sure that what you're saying is actually correct, or
you end up causing even more confusion. So keeping accurate and up-to-date
documentation is a big part of the task of writing a C program. (I think for
C++ it's probably even more important because you need to define for each
method not only what it does, but what it should do in the future, so that
when it's overridden in a derived class, that contract is followed.)

By writing a program in a typed functional language, you are proving facts
about your program, facts which are encoded in an apparent manner in the
source code itself, and not some external documentation. You are proving
that disjoint programs do not interfere, that order of evaluation doesn't
matter, whether they (reduce to programs which) have effects, etc. There's
also safety, and theorems for free. Then there are other properties which
are obvious (to a programmer) in a Haskell program which get buried in the
equivalent C(++) program, e.g., that every member of a data structure is
traversed in a fold (no early exits). Many of these things hinge on the
typing of a program, which is inferred and checked for you, so there is less
of a burden on conscientious programmers.

These facts always get driven home to me when I program in C or even untyped
functional languages like Scheme. I always seem to end up writing down in
documentation, stuff that I would leave implicit in a typed language,
because its apparent from the source code, and/or automatically checkable by
merely compiling the source code.
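A contrived but runnable illustration of the point (the examples are mine, not drawn from any particular program):

```haskell
-- The type alone proves facts: 'twice' can only apply f; it cannot
-- inspect, discard, or fabricate values of type a. That property is
-- checked on every compile, not stated in external documentation.
twice :: (a -> a) -> a -> a
twice f = f . f

-- This type honestly advertises an effect; in C the same fact would
-- live in a comment that nobody verifies.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)
```

For instance, `twice (+ 3) 1` is 7, and no future edit to `twice` can quietly make it depend on the particular element type.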






RE: newbie conceptual question [from haskell list]

2001-07-26 Thread Frank Atanassow

D. Tweed wrote:
 Yes, I guess it's time for a confession: I'm making a rather sweeping
 assumption that the patterns in which I do and don't program are in some
 way `average' or `typical', even though they probably aren't. For
 instance, I don't even use patterns like `a[b++]=c;' just because it
 requires too many `mental cycles' for me to figure out what's going on
 when I reread the code; most other C programmers probably would use that
 construct, I expect. Having said that, within this sort of style I find
 that I don't really (have to) go beyond the idea of a machine with a store
 of named variables, some of which are actually arrays or have other
 structure, and tend to use the simple rule `if you assign/read outside
 array bounds or to a pointer location which does not map within a store
 defined variable/variable array, the program may (in a way which is so
 dependent on history as to be effectively non-determinisitic) either place
 gibberish in any other stored variable cell, return gibberish, return
 sensible values and have no observable ill effects or seg-fault'.

My reaction to that is: you are not programming in C. If you restrict
yourself to nice subsets of a programming language, then obviously your
programs will satisfy better properties. Viewed in this way, Haskell is a
(completely different) subset of C. But of course you lose the opportunity
for the compiler and standard tools (which includes the standard semantics)
to take advantage of these nicer properties, and are justified then in
complaining to the language designers, "Hey, why did you put all this cruft
in there? I don't need it, and it makes it harder to use the language!"

The point of having a language, IMO, is that its laws govern, _universally_,
all the programs you can write in it. Otherwise you would just say, "Yeah, C
is really hard to reason about, but if you consider all my C programs
_individually_ the sublanguages they are written in actually satisfy nice
properties." So, in the limit you might specialize your `language'
differently to every single program you write. By that token, then, all
programming languages become equal, and we have reduced this discussion to
triviality.

I am not denying that you can have a nice imperative language (although I
think that `just' a global store is too unstructured to support good
metaproperties for reasoning). I think I even said something to that effect,
that it's not the paradigm which matters, but rather the existence of nice
properties, or effectively simple semantics, as you said. But C is not such
a language.






Re: Notation question

2001-05-29 Thread Frank Atanassow

Juan Carlos Arevalo Baeza wrote (on 28-05-01 18:02 -0700):
 
 Just a very naive question, because I'm really curious. I've seen in 
 previous messages here discussions about type systems using this kind of 
 notation:
 
   G |- f :: all x::S . T        G |- s :: S
  --------------------------------------------
   G |- f s :: [s/x]T
 
 I'd never seen it before, and a few searches I've launched over the net 
 have turned up just a couple more examples of this notation, but without a 
 single reference to how it works or what it means, or even what it's 
 called, for that matter.
 
I'd appreciate any pointers.

May I also recommend my PL theory references page?:

  http://www.cs.uu.nl/people/franka/ref.html

BTW, here is the quick version:

  It says: Suppose term f has type all x::S.T in free variable environment
  G. Suppose further that term s has type S in environment G. Then the
  term f s has type [s/x]T (T with s substituted for x) in environment G.

But this in itself is really not very meaningful unless you assume some
standard definitions. For example, there should be rules for introducing
variables, introducing/eliminating all types, manipulating environments, a
definition for substitution, etc.
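The rule as written substitutes a *term* into a type, which is a dependent-types feature Haskell lacks; the nearest everyday analogue is instantiating a forall-quantified type, which GHC's TypeApplications extension (a much later addition, used here purely as a sketch of the idea) makes explicit:

```haskell
{-# LANGUAGE TypeApplications #-}

-- id :: forall a. a -> a. Applying it "at" Bool plays the role of
-- eliminating the quantifier: [Bool/a](a -> a) = Bool -> Bool.
idBool :: Bool -> Bool
idBool = id @Bool
```

The analogy is loose (here the thing substituted is a type, not a term s), but the substitution-into-the-body-of-the-quantifier mechanism is the same.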





Re: The next step

2001-05-28 Thread Frank Atanassow

Simon Marlow wrote (on 28-05-01 10:17 +0100):
 It's not propaganda.  The fact is if any of the standard libraries use
 the LGPL, then some people will be prevented from using them.  That's
 the last thing we want, right?  Now you might argue from a moral
 standpoint that the companies that these people work for are basing
 their business models on intellectual property and therefore deserve
 everything they get, but we're not trying to do open source advocacy
 here, we're just trying to put together a set of libraries that everyone
 can use.

Some people don't see it that way. They would say that the GPL is the _less_
restrictive license because it ensures that everybody benefits (equally) from
everybody else's improvements. From that perspective, companies benefit also
since they may use and improve the code, provided they publish it. Will they
lose some potential revenue that way? Possibly. But as you suggested, the idea
is to make the audience as _wide_ as possible, not as _rich_ as possible. ;)





RE: The definition of lhsfun in the report

2001-05-10 Thread Frank Atanassow

 I'am confused about the funlhs production, in Report on the programming
 Language Haskell 98 of the 1st February 1999.

 In the report one of the funlhs-productions is (see page 127):

   funlhs -> var apat {apat}

 That is, a var followed by one or more apat. But you can have functions
 like this:

   fun :: Int
   fun = 17

This does not declare a function. If you look at the decl production, you
will see that there is another possibility, namely pat^0, which provides
this syntax.

OTOH it is possible to declare a function using pat, e.g.:

  fun = \x -> x+1

In other words, funlhs is redundant.
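Concretely, all of the following are accepted declarations; only the last goes through the funlhs production:

```haskell
-- A pattern binding (pat): no funlhs involved, and not a function.
fun :: Int
fun = 17

-- A function declared via a pattern binding to a lambda.
inc :: Int -> Int
inc = \x -> x + 1

-- The same function via funlhs (var apat {apat}).
inc' :: Int -> Int
inc' x = x + 1
```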






RE: help!!!!

2001-04-26 Thread Frank Atanassow

Cadena Silva Andres wrote:
 I need examples of trees please.

No problem!

Lemme see now...

There's oak, scarlet oak, bur oak, pin oak, northern red oak, shingle oak,
swamp white oak, pine, Scotch pine, Austrian pine, willow, white willow,
birch, redwood, mulberry, apple, cherry, black cherry, Oriental cherry, yew,
hazel, juniper, alder, ash, elm, holly, poplar, aspen, maple, hedge maple,
trident maple, Arnur maple, shagbark maple, sugar maple, paperbark maple,
Japanese maple, black maple, Norway maple, Crimson King Norway maple,
Tatarian maple, red maple, autumn flame red maple, october glory red maple,
silver maple, Beebee silver maple, Freeman maple, White Tigress maple,
rowan, hemlock, larch, cedar, spruce, white fir, Nikko fir, Douglas fir,
Nordmann fir, sequoia, yellow buckeye, Ohio buckeye, hickory, bitternut
hickory, shagbark hickory, northern catalpa, filbert, dogwood, hawthorn,
hornbeam, American hophornbeam, persimmon, silverbell, magnolia, Franklin,
lacebark, pear, sassafras, lilac, corktree, Ohio buckeye, red buckeye,
chestnut, horsechestnut, European horsechestnut, Brioti red horsechestnut,
serviceberry, walnut, black walnut, sweetgum, crabapple, Japanese Zelkova,
Japanese pagodatree, elder, boxelder, Mimosa, orange, Cockspur hawthorn,
Washington hawthorn, tulip, sourwood, palm, forest pansy redbud, maidenhair,
ginkgo, photinia, black locust, sunburst thornless honeylocust, cypress,
weeping nootka false cypress, and, last but not least, the charmingly named
American bladdernut.

You need any more, just holler, y'hear?






Re: newbie

2001-03-12 Thread Frank Atanassow

Lars Henrik Mathiesen wrote (on 10-03-01 20:35 -):
 However, in some expositions of category theory, the usefulness of
 monads is justified because they 'belong' to a certain adjunction.

You can regard a monad as arising from a particular adjunction but, although
every adjunction determines a unique monad, the converse is not true. In fact,
the collection of resolutions for a monad forms a category with adjunctions as
objects and certain functors as arrows. The adjunction which gives rise to the
Kleisli category is initial in this category. The terminal object is called
the Eilenberg-Moore category and it has as objects M-algebras, like your `xi',
and as arrows M-algebra homomorphisms.

 In Haskell you can't express the adjunction, only the monad, which may
 take a little getting used to.

I've been looking at this recently to find some canonical way to express how
to `deconstruct' monadic terms (i.e., how to run them). The idea is that you
build up monadic terms in the Kleisli category, somehow describe a resolution,
then use the initiality property of the Kleisli category to map the terms to
the category of the resolution, where you can use the adjunction to
destructure them.

 But how about the related concept of an M-algebra? That is, a type T
 and a function 'xi :: M T -> T' so that these laws hold:
xi . eta   === id
xi . (fmap xi) === xi . mu

If you reverse the sense of the last equation and regard the monad primitives
as constructors:

  xi (Eta x) = x
  xi (Mu m)  = xi (fmap xi m)

then this starts to look like a pattern-matching definition for folding a
monad regarded as an algebra. In fact, you can express the operators this way
in Haskell if you use are willing to use a nested datatype:

  data M a = Eta a | Mu (M (M a))

 Would it be useful to have functions that were polymorphic over
 List-algebras? (Not that I have any idea how that might be possible to
 express in Haskell).

I dunno if it is useful for List-algebras, but if you take your monad M as the
substitution monad generated by a term signature, then the resolutions of
the monad can be regarded as a way of factoring M into a composition of
signatures which (I think) represent the values and redexes of the term
language. The Kleisli and E-M categories are extremal cases. In the Kleisli
category the redex functor is trivial; I think this is why you can use
it to pass around computations.

In the E-M category, the value functor is trivial, but I'm not sure what this
means precisely yet. For intermediate cases, you get a particular choice of
normal forms.  What I'm thinking is that an M-algebra for a language M gives
you a way of extending a denotational description of the normal forms to one
for the entire language, which is automatically sound for the equational
theory.

Which sounds useful to me for writing interpreters.
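For concreteness, here is one runnable reading of the fragments above, with the Functor instance filled in:

```haskell
-- The nested datatype from the discussion: Mu wraps a doubly-nested term.
data M a = Eta a | Mu (M (M a))

instance Functor M where
  fmap f (Eta a) = Eta (f a)
  fmap f (Mu m)  = Mu (fmap (fmap f) m)

-- An M-algebra on any type t: the pattern-matching fold that
-- flattens all the nesting away, as in the equations above.
xi :: M t -> t
xi (Eta x) = x
xi (Mu m)  = xi (fmap xi m)
```

For example, `xi (Mu (Eta (Eta 42)))` collapses the two layers of Eta under the Mu and yields 42.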





Re: Proposal: module namespaces.

2001-02-27 Thread Frank Atanassow

I have two nitpicking comments.

Malcolm Wallace wrote (on 26-02-01 17:59 +):
   * The use of qualified imports becomes more verbose: for instance
 import qualified XmlParse
   ... XmlParse.element f ...
 becomes
 import qualified Text.Xml.Parse
   ... Text.Xml.Parse.element f ...
 However, I propose that every import have an implicit "as"
 clause to use as an abbreviation, so in
 import qualified Text.Xml.Parse   [ as Parse ]
 the clause "as Parse" would be implicit, unless overridden by the 
 programmer with her own "as" clause.  The implicit "as" clause
 always uses the final subdivision of the module name.  So for
 instance, either the fully-qualified or abbreviated-qualified names
 Text.Xml.Parse.element
 Parse.element
 would be accepted and have the same referent, but a partial
 qualification like
 Xml.Parse.element
 would not be accepted.

I don't like the implicit "as". The reason for having a tree structure for
names is that leaves are likely to collide. So I might use both
Text.ParserCombinators.UU and Text.PrettyPrinter.UU. In this case I might want
to use the declarations:

  import qualified Text.ParserCombinators.UU as PC
  import qualified Text.PrettyPrinter.UU as PP

Since a person is likely to use several packages in the same subtree quite
often, and in our goal of a "library-rich world" we expect a plethora of
implementations from disparate sources, I wonder whether the default "as" is
useful enough in practice. As an example, in cases where sibling modules
actually have the same interface and you want to write a client module which
can use either implementation interchangeably, you would always use an
explicit "as" anyway, since you want to write, say, "Tree.map" rather than
"AVL.map" or "RedBlack.map".

Besides, it is only a few more characters to make it explicit, and I think it
is better to avoid implicit behavior when possible.
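Today's standard libraries (which postdate this proposal) illustrate the point: Data.Map and Data.Set export many of the same names (empty, member, fromList, ...), so one writes the "as" clauses explicitly:

```haskell
-- Explicit "as" abbreviations; both modules export 'member',
-- 'empty', 'fromList', so unqualified import would collide.
import qualified Data.Map as M
import qualified Data.Set as S

example :: (Bool, Bool)
example = ( M.member 'k' (M.fromList [('k', 1 :: Int)])
          , S.member 'k' (S.fromList "key") )
```

Both lookups succeed, and at every use site the one-letter prefix says which structure is meant.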

Well, I don't care too much.

I care more about:

 A fuller proposed layout appears on the web at
 http://www.cs.york.ac.uk/fp/libraries/layout.html

I wish we could agree on capitalization of acronyms. On one hand, we have:

  Gtk, Jpeg, Html, Xml

but on the other:

  AVL, ODBC, FIFO, MD5, UI, PPM, FFI, IO, UU, PP, DSP, FFT, FIR, URL, CGI

Personally, I prefer the first group being normalized to uppercase rather
than vice versa, since "JPEG" and "HTML" look right, but "Url" and "Odbc" look
terribly wrong. (Unless you are Dutch, in which case maybe "Ui" looks good but
is still misleading. :)

Other miscellanea:

  * I think the top-level "Interface" is better named "Console", to contrast
with "Graphics".

  * I would prefer short names to long. So: "Text.Parse" rather than
"Text.ParserCombinators", "Data.Struct" rather than "Data.Structures",
"Graphics.Draw" rather than "Graphics.Drawing", etc. Generally, the
ancestors of a short name should give enough context to disambiguate it.

  * I would move "Format" out of "Graphics" and into "Data.Encoding". (But
maybe "Encoding" is intended to be a collection of things of `universal'
encodings, which clearly "Jpeg", for example, is not.)

  * Change "Data.Structures.Trees" and "...Graphs" from plural to
singular. Same for "Data.Encoding.Bits". But not "Data" to "Datum"! :)

  * Maybe change "Data.Structures" and "Data.Encoding" to one name each,
"DataStruct" and "DataEncoding" (or "Encoding" or "Codec"). The reason is
that it's not clear to me why they belong in the same subtree except for
the fact that in English both terms start with "Data". In other words, we
should try to group things semantically rather than lexically.





Re: Learning Haskell and FP

2000-12-29 Thread Frank Atanassow

Benjamin L. Russell wrote (on 28-12-00 17:35 -0500):

   "Furuike ya!  Kawazu tobikomu mizu no oto."  --Matsuo Basho
  [..] Is it OK if I show off and steal some thunder? :)

So much for that idea...!

"(It's) An old pond! The sound of water steadily dripping in..."
 
 Actually, if I may add, the translation I remember was the following:
 
"[It's] An old pond!  The sound of water as the frog jumps in"
 
 "Kawazu" means "frog," and "tobikomu" means "(to) jump in."

That makes sense. I was guessing that "kawazu" was the old form of modern
"kawarazu" (`without changing'). Modern `frog' is "kaeru", though, and the
transitive form of "kawaru" (`change') is also "kaeru", so I suppose there is
some linguistic relationship. "tobikomu" makes much more sense this way too. I
thought it was a figurative usage, but it still didn't sound right...





Re: Learning Haskell and FP

2000-12-28 Thread Frank Atanassow

i r thomas wrote (on 28-12-00 12:50 +1000):
 Unfortunately, the "Gentle Introduction To Haskell" that haskell.org links to
 is not a very useful introduction. I am getting more out of Rex Paige's "Two
 Dozen Short Lessons in Haskell". (I am studying Haskell and C# on my own in
 my spare time as a break from my medical practice.)

What did you find unuseful about GITH? How could it be improved? What were
your expectations for it? What was more useful about Rex Paige's notes?

 "Furuike ya!  Kawazu tobikomu mizu no oto."  --Matsuo Basho

 Translation please !

Is it OK if I show off and steal some thunder? :)

  "(It's) An old pond! The sound of water steadily dripping in..."





Origins of the G-Machine

2000-11-20 Thread Frank Atanassow

At the Sunday Times website, there is a very odd story about the recovery of
the Enigma machine stolen from Bletchley Park.

http://www.sunday-times.co.uk:80/news/pages/sti/2000/11/19/stinwenws02039.html

It includes this fun fact, which Haskellers may find of interest:

  Enigma machines vary and the official designation of this one is a Type G,
  or the G-machine; it was sometimes called the counter machine or
  "Zahlwerk-Enigma", because it has a letter counter.

Will this hinder the adoption of Haskell compiler technology in France and
Israel...?






Re: Literate Programming

2000-09-27 Thread Frank Atanassow

D. Tweed writes:
  I guess it's arguable; as mentioned I masacre loads of trees due to my
  habit of working on printed out source code and I could argue that:
  supposing I'm reading a program where == is defined for a particular
  instance of class with the comment that `note the care taken to ensure
  transitivity', then I may very well want to find out if == is actually
  used anywhere for this type _under the assumption that it is transitive_
  without having to wade through lots of == on different classes. (From
  memory, I believe there's no H98 requirement == be transitive, but this is
  just a contrived example anyway.) Just because type class methods _ought_
  to be semantically related doesn't mean they necessarily will be.

First, the `semantic relation' I was thinking of was the equivalence class of
types for each class method. In this sense they are always semantically
related.

Second, I guess you could try to do a conservative type analysis to pick up
method uses whose instance is statically determined (as is often the case with
arithmetic operators) but I don't think that is something that should be
treated in a literate programming tool. For one thing, it means that the
reader of the documentation needs to know the algorithm used for the type
analysis if he wants to use the indexing to make the sort of informed decision
you suggested (because he needs to know which method uses will and won't be
caught by the analysis).

Personally, I think literate programming is all about syntax, so I would avoid
trying to make the tool do any sort of semantic analysis.

OTOH, one useful thing might be for it to partition indices of identifiers
according to provable-isomorphism classes of types. I think such a tool was
written for Ocaml, although it was interactive.

Here is a page on type isomorphisms:

  http://www.dmi.ens.fr/users/dicosmo/ResearchThemes/ISOS/ISOShomepage.html

At the very bottom of the page it even mentions something about Haskell source
code.

  Now I think about it, there's a problem even if you trust the programmer
  to ensure type class instance methods are semantically related: suppose
  I've got
  
  class Clothing where
wear x = ...
  
  class Abraidable where
wear x = ...
  
  I wouldn't want those two to be in the same index group (or is that what
  you meant by scoping -- I initially thought you meant things like let
  bindings?)

I had in mind both, and all other kinds of scoping as well, e.g., module
imports.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





RE: The importance and relevance of FP

2000-08-20 Thread Frank Atanassow

Sorry, I got carried away with my silly "hacker" post.

Manuel M. T. Chakravarty writes:
  Frank Atanassow [EMAIL PROTECTED] wrote:
Proposition: Hackers can like FP. [..]
Proof 1: By contradiction.
  Nothing could be more obscure or esoteric to a hacker than FP.
  (They even seem to admit it themselves.)
  
  I don't see the validity of this point.  Especially, given that much of
  today's hackerdom originated in the Lisp communities.  For example, when I
  talked with ESR about programming languages, he said that one of the things
  he misses in Python is fully-fledged lambda abstractions.

I was going to mention Stallman and Emacs; but by my definition Stallman was a
guru, not a hacker...

ESR and RMS may like LISP, but clearly they aren't "converts" in the same
sense as most of the list members here; Emacs and Guile aside, most of the
code they produce seems to be in conventional languages.

I should admit that I am not one of those people who feel we should be so
gung-ho about using different languages for different purposes, i.e., it may
be impractical to do otherwise now, but it's not a state of affairs we should
be perpetuating. The differences between languages within each paradigm
(imperative, FP, LP) are not very significant. Constructing a reasonably
"universal" language in each class is not that far outside our reach. And the
distinctions between each class are starting to break down too (witness
imperative monads). Even for very domain-specific languages, there are
substantial benefits to treating them in an internal fashion, and it's
becoming increasingly practical to do so (witness combinator-style libraries).

   Of course, many hacker sapiens are something less than open-minded. You
   only need to look at /. to convince yourself of that.
  
  Never confuse wannabees
  http://www.tuxedo.org/~esr/jargon/html/entry/wannabee.html with the real
  thing.

OK, different names, same denotation. My "guru" is your "hacker"; my "hacker"
is your "wannabee". Your hackers are still in the minority, and the
proposition is that something needs to be done about convincing the masses. I
was suggesting that if you hook the gurus, then the rest will follow.

   And if you _can't_ convince a guru and write better programs faster, then
   maybe FP isn't so good after all... unless you plan on writing all the
   world's programs yourself.
  [..]
  I think, the critical thing here is not outperforming existing
  applications, but saving time developing new applications.  If there is one
  thing a hacker hates, then it is wasting time on doing a job that could be
  done more efficiently[1] or repeatedly doing a job that could be automated.
  [..]
  [1] The exception is of course where doing the job is the
  end, not a means.

Not if the cost in efficiency is too high. The situation isn't that black and
white. I really do believe that the imperative camp's valuation of the
correctness vs. efficiency tradeoff is different.

<rant>
Also, I've observed many FP/LP people argue that slow or memory-hungry
applications aren't such a big deal because "in 5 years, Moore's law will make
them efficient."  The problem with this argument is that consumers and
programmers don't reason this way. They don't say to themselves, "Now I have a
shiny new computer, three times faster and with twice the memory of my old
one. NOW I can run functional programs!" :) They say, "Now I have a shiny new
computer, three times faster and with twice the memory of my old one. NOW I
can run twice as many applications three times as fast!" As machines get
better, program functionality also increases. "Perceived performance" seems to
remain fairly constant, if you see what I mean.
</rant>

Rapid application development is not enough. There are imperative languages
for that already: Python, Perl, Tcl... They fit into the imperative mold, and
they seem to give imperative programmers the things they want. Probably Haskell
can provide equal benefits, but to convince people to change paradigms,
you need to provide quite substantial advantages _beyond_ that.

"Extraordinary claims require extraordinary evidence."

OK, enough ranting. All this is pretty incidental to Haskell, so, for the
benefit of others who have already been put off by the recent excess of
off-topic posts, I'll reply to any replies in e-mail, and you are all free to
sling mud at me, since I probably deserve it anyway...

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University Padualaan 14,
PO Box 80.089, 3508 TB Utrecht, Netherlands Tel +31 (030) 253-1012, Fax +31
(030) 251-3791





Re: The importance and relevance of FP

2000-08-19 Thread Frank Atanassow

Manuel M. T. Chakravarty writes:
  Florian Hars [EMAIL PROTECTED] wrote,
  
   To cite the comments of Olof Torgersson on Haskell
   (www.cs.chalmers.se/pub/users/oloft/Papers/wm96/wm96.html):
   
  As a consequence the programmer loses control of what is really
  going on and may need special tools like heap-profilers to find out
  why a program consumes memory in an unexpected way. To fix the
  behavior of programs the programmer may be forced to rewrite the
  declarative description of the problem in some way better suited to
  the particular underlying implementation. Thus, an important
  feature of declarative programming may be lost -- the programmer
  does not only have to be concerned with how a program is executed
  but has to understand a model that is difficult to understand and
  very different from the intuitive understanding of the program.
  
  What's the problem with using a heap profiler?  Sure, heap
  usage can get out of hand when writing a straight forward
  maximally declarative (whatever that means) version of a
  program.  Is this a problem?  No.

One might argue analagously that C/C++ programmers ought to use tools like
proof assistants and model checkers to help reason about the correctness of
their programs. The fact is that, when we talk about the operational behavior
or denotational semantics of a program, any reasoning "tool" which is _in_ the
language is preferable to a tool which is _outside_ the language.

Thus, when correctness is the more important, it is better to use Haskell, the
language constructs of which encourage (partial) correctness, than it is to
use a C + a verifier; and when efficiency is more important, it is better to
use C, the language constructs of which encourage efficiency, rather than
Haskell + profiling tools.

For me, correctness is almost always paramount, but clearly some people feel
differently, or we would not be so ubiquitously plagued with buggy free and
commercial software, hardly any of which is written in declarative languages.

  The trick about a language like Haskell is that you get to a
  working prototype of your program extremely quickly.  Then,
  you can improve performance and, eg, use a heap profiler to
  find the weak spots and rewrite them.  And yes, then, I
  usually lose the pure high-level view of my program in
  theses places in the code.

I would rather not lose it at all, by expressing the operational behavior
which I demand in a declarative fashion. This is, IMO, part of the reason why
notions such as monads (and linearity) are valuable: they give you some
measure of control over operational details without sacrificing the "pure
high-level view".

Of course, some fiddling is always needed...

  This is still a *lot* better then not having the high-level
  view at all and taking much longer to get the first working
  version of my program (even if it is already more efficient
  at that point).  I don't know about you, but I rather have
  90% of my program in a nice declarative style and rewrite
  10% later to make it efficient, then having a mess
  throughout 100% of the program.

A problem, as I see it, is that, although compilers like GHC do an admirable
job of optimizing away the overhead of laziness, etc. in many circumstances,
you can't be sure. You can use profiling tools and such after writing
your code, but sometimes the changes that need to be made can be far-reaching,
and you'll wish you had short-circuited this cycle in the first place.

A lot of people who program extensively in Haskell have a good
understanding of the underlying abstract machine, and can anticipate the
changes that need to be made, or at least ensure that they are
localized. For example, they know "tricks" like ways to increase laziness;
when I see messages like Florian's, I am reminded that such tricks can be
relatively obscure.
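One such trick, as an aside (a standard example, not one raised in this
thread): strict versus lazy accumulation has the same denotation but very
different heap behavior, which is exactly the sort of thing a heap profiler
ends up diagnosing.

```haskell
import Data.List (foldl')

-- Both functions denote the same sum. The lazy foldl builds a chain of
-- suspended (+) thunks proportional to the list length before forcing
-- any of them; foldl' forces the accumulator at each step and runs in
-- constant space. Nothing in the types reveals the difference.
sumLazy, sumStrict :: [Integer] -> Integer
sumLazy   = foldl  (+) 0
sumStrict = foldl' (+) 0
```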

In short, the problem is that in a language such as Haskell without a
high-level abstract operational semantics, you don't see all the benefits of a
declarative style, since, as far as efficiency goes, we have only replaced the
conventional programmer's mental model of a traditional machine architecture,
with a more esoteric model which is some fuzzy idea of a lazy abstract
machine. The abstract machine may be simpler, but you still can't reason about
it in a formal way if it's not a part of the standard. The best you can do is
go off and read some papers, written by researchers for researchers, which
describe a lazy machine, and take a guess that your particular implementation
uses much the same thing. And hardly anyone will bother to go that far.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





RE: The importance and relevance of FP

2000-08-19 Thread Frank Atanassow

D. Tweed writes:
  However, the issue that lots of the simple & productive ideas
  from FP are culturally alien & even suspect to programmers in other
  languages is very true. I write lots of stuff in C++ and the fact that I
  have functions returning two results return a pair<T,U> rather than either
  a specially created class or a copy-into-reference-parameter, or that I
  use STL vectors/lists rather than writing inplace code using
  arrays/pointers elicits cries of `Yuck' & `Too weird'. And I agree that
  this is a real phenomenon/problem which may well lead to Haskell remaining
  a language that's scorned (in both senses) and runtime systems which make
  it difficult to do simple Haskell things.

[Written with tongue only halfway in cheek.]

I disagree. "They just don't get it" doesn't cut it anymore.

Proposition
  Hackers can like FP.

We give two proofs. First, a lemma.

Lemma
  Hackers like weird, obscure, esoteric, complicated stuff.
Proof:
  We exhibit a witness.

  Consider *NIX. Now there is a big sprawling mess of complicated
  interdependent gobbledygook if ever I saw one. And yet it attracts those
  critters like honey.

Back to our proposition.

Proof 1:
  By contradiction.

  Nothing could be more obscure or esoteric to a hacker than FP. (They even
  seem to admit it themselves.)

  Therefore, by the lemma, hackers can potentially like FP. QED.

Proof 2:
  Compared to *NIX, Haskell is cake. A simple functional language can
  be axiomatized in a few lines. There are concrete rules for reasoning about
  it. There are tangible implementations. That's more than enough.

  Therefore, hackers can like FP. QED.

Of course, many hacker sapiens are something less than open-minded. You only
need to look at /. to convince yourself of that. But their weakness is their
fawning admiration of hacker superioris, i.e., gurus and other gung-ho
gift-giving go-getters.

Now, a guru may not know much about adjunctions or omega-CPOs, but gurus are
typically rather open-minded (cause or effect?)---if it gets results, they'll
use it. So, to convert a guru, you need to show them some concrete results. So
write some useful applications, write them fast, write them well, show them to
a guru, and show the guru how to do it too. Then you'll convert the gurus;
convert gurus, and you'll convert their flock. Convert the flock and you're
halfway home. (Just look at the recent announcements about the GNOME
Foundation.)

[Musical interlude:

  You can't change the world
  But you can change the facts
  And when you change the facts
  You change points of view
  If you change points of view
  You may change a vote
And when you change a vote
  You may change the world.
-- Depeche Mode ("New Dress")
]

And if you _can't_ convince a guru and write better programs faster, then
maybe FP isn't so good after all... unless you plan on writing all the world's
programs yourself.

Unfortunately, I don't see many FP programs that are better or faster than
conventional ones. Those that are, are incestuous: things like compilers and
program analyzers for FP languages. Hackers don't need those yet. They need
things like Manuel's Gtk interface, or Daan's DB interface. Then they need to
see them in action, outperforming existing applications in some way.

Of course, IANAH (I Am Not A Hacker) so you may take my hypocritical remarks
with a grain of salt or three.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: The importance and relevance of FP

2000-08-17 Thread Frank Atanassow

Ralf Muschall writes:
   simplistic, binary distinction), then you have to decide where to draw the
   line between "functional languages" and other languages that may, to some
  
  I think it became impossible to draw that line since the inventors
  and maintainers of dysfunctional langauges learned that FP is cool
  and added closures etc. (Perl has them since 5.001m, Javascript
  since 1.2, just to mention a few).

First, you might well include first-order (algebraic) languages in your
definition of "functional".

Second, if your language has a semantics, it is not very hard to draw a
distinction: you just have to show an appropriate embedding of
lambda-calculus.

If your language has no semantics, then you are in for a world of trouble
anyway, and the question of whether your language is (higher-order) functional
will be the least of your worries.
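To make "an appropriate embedding of lambda-calculus" concrete, here is a
minimal sketch using higher-order abstract syntax in Haskell (my illustration,
not part of the original exchange):

```haskell
-- The untyped lambda-calculus, with object-level binding represented
-- by Haskell's own functions (higher-order abstract syntax).
data Term = Lam (Term -> Term)
          | App Term Term

-- Normalize to weak head normal form; a Lam is already a value.
eval :: Term -> Term
eval (App f a) = case eval f of
                   Lam g -> eval (g a)
                   t     -> App t a
eval t         = t

-- (\x. x) applied to itself evaluates to \x. x.
identityTerm :: Term
identityTerm = Lam (\x -> x)
```

A language that cannot express such an embedding (closures plus application
satisfying the right equations) has a fair claim to not being higher-order
functional.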

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





RE: Haskell jobs (fwd)

2000-07-19 Thread Frank Atanassow

S. Alexander Jacobson writes:
  Off the top of my head here are some Haskell specific things that we need:

  * HSP pages (like ASP or JSP or PHP)

Erik Meijer has done this. Can't find the preprint online, though. (Erik?)

  * in memory Haskell server analogous to JServ that talks to apache

mod_haskell? 

  http://losser.st-lab.cs.uu.nl:8080/

  * Haskell access to a pool of database connections

Daan Leijen's HaskellDB?

  http://haskell.cs.yale.edu/haskellDB/

  * Haskell access to Java classes

Erik's Lambada

  http://www.cs.uu.nl/people/erik/Lambada.html

  * Encapsulation of Haskell as Java classes

I don't know what that means, exactly. You mean a Hugs-like implementation in
Java? Not a bad idea... do you need that, though, now that GHC can produce
Java bytecode? Anyway, once the Java backend stabilizes, you would (in
principle, at least :) be able to cross-compile GHC into a Java binary, and
then use its upcoming interactive frontend. You still wouldn't have
programmatic hooks (i.e., a Java-level rather than console-level interface) to
the front-end, but it would become much easier to add.

  And all of this has to be relatively zipless and compatible with an
  existing JServ/JSP/Apache installation.

Eh? "zipless"?

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





RE: Haskell jobs (fwd)

2000-07-19 Thread Frank Atanassow

Chris Angus writes:
  Aren't most of these "java additions" MS J++ or MS specific 
  rather than java/jdbc  "run-anywhere" though?

Not as far as I know, but maybe Erik and Daan will clarify.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





RE: Haskell jobs (fwd)

2000-07-19 Thread Frank Atanassow

Manuel M. T. Chakravarty writes:
  Frank Atanassow [EMAIL PROTECTED] wrote,
  
   Chris Angus writes:
 Aren't most of these "java additions" MS J++ or MS specific 
 rather than java/jdbc  "run-anywhere" though?
   
   Not as far as I know, but maybe Erik and Daan will clarify.
  
  HaskellDB is Win-specific as it is based on COM - at least
  that is what Daan told me last week.  (Otherwise, it looks
  super-cool.) 

There is a non-Windows distribution at

  http://haskell.cs.yale.edu/haskellDB/download.html

so I thought it supported Unix, but the page also says, "no driver is yet
available for these platforms but it should be quite easy to create a general
ODBC driver." Maybe it can be extended to work with NDBM/GDBM/MySQL?

(Sorry, I am a DB moron.)

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Eifskell

2000-07-04 Thread Frank Atanassow

Ronald J. Legere writes:
  
   While perusing the newsgroups, I came across a post (of the same subject
  as above, if you want to find it on deja news)  by Bertrand Meyer (Eiffel)
  suggesting "Haskell (http://www.haskell.org) shouldn't really be a
  separate programming language, but rather an Eiffel library." and that
  someone should write it. I think this is driven by the recent addition of
  closure like 'agents' (http://www.eiffel.com/doc/manuals/language/agent/).  
  in eiffel (including anonymous functions). It is not clear what this
  library should do (lazy evaluation maybe?) but I was quite fascinated to
  see that Meyer is learning the joys of haskell :)

I read the discussion, and it sounded to me more like an expression of
contempt on Meyer's part ("_shouldn't_ really be a separate language").

I only know the basics of Eiffel, but it seems unlikely that a significant
part of Haskell could be encapsulated as an Eiffel library. Meyer suggested
using (what we know as) closures to encode lazy evaluation (looked like a
call-by-name CPS translation), but someone rightly shot this down in a reply.

Anyway, the type system is at least as (more, IMO) important to Haskell as its
evaluation regime, and I find it hard to imagine a reasonable embedding of it
in Eiffel's. It seems far easier to do the reverse!
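For the record, the thunk encoding in question can be sketched as follows
(written in Haskell syntax for concreteness, to be read as if the host
language were strict; this is my reconstruction, not Meyer's proposal). The
objection is visible in the sketch: bare closures give call-by-name, while
Haskell's call-by-need additionally requires memoization, i.e. hidden state.

```haskell
import Data.IORef

-- Call-by-name: a suspended computation is just a nullary closure.
-- (Read as strict-language pseudocode; in Haskell itself the argument
-- would of course already be lazy.)
type Thunk a = () -> a

force :: Thunk a -> a
force t = t ()            -- re-evaluates on every force: no sharing

-- Call-by-need memoizes the first result, which requires mutable state.
data Cell a = Unforced (IO a) | Forced a

memoThunk :: IO a -> IO (IO a)
memoThunk act = do
  ref <- newIORef (Unforced act)
  return $ do
    cell <- readIORef ref
    case cell of
      Forced v   -> return v
      Unforced m -> do v <- m
                       writeIORef ref (Forced v)
                       return v
```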

BTW, the most interesting thing I discovered was that DejaNews hosts a forum
which mirrors the Haskell list. It's called "fa.haskell".

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: More on Quantum vectors...

2000-06-07 Thread Frank Atanassow

I cannot resist replying to this remark:

Jan Skibinski writes:
   Now, someone, somewhere could have written a paper
   "Isolation properties of sandwiched materials", but
   how on earth he/she would ever invented something
   of this sort in the first place or - granted that -
   how could he/she ever appreciate an importance
   of this little problem?

I would not be surprised to find this article appearing in the next Scientific
American.

Consider these gems:

  "Finger-Length Ratios and Sexual Orientation," Terrance J. Williams,
  Michelle E.  Pepitone, Scott E. Christensen, Bradley M. Cooke, Andrew
  D. Huberman, Nicholas J.  Breedlove, Tessa J. Breedlove, Cynthia L. Jordan,
  and S. Marc Breedlove, Nature, vol.  404, no. 6777, March 30, 2000,
  pp. 455-6. [This yielded what the authors call "some surprising
  information." From it they concluded that, statistically, it is possible to
  ascertain people's sexual orientation simply by knowing the ratio of each
  individual's finger lengths.]

  "Why are Toads Right-Handed?" Nature, T. Naitoh and R.J. Wassersug,
  vol. 380, 1996, pp. 30-1. [Sims, Andrews, and Young explore the method by
  which certain fish "remove noxious material from the stomach." Called "full
  gastric eversion," this consists of turning the stomach inside out and
  draping it through the mouth. Wassersug explains that, "for the record, the
  serious scientific answer is that frogs have to do this task of stomach
  wiping with their right hand simply because the upchucked stomach always
  hangs out of the right side of the mouth."]

  "Effect of Ale, Garlic, and Soured Cream on the Appetite of Leeches," Anders
  Barheim and Hogne Sandvik, "British Medical Journal," vol. 309, Dec 24-31,
  1994, p. 1689. [This article won the Ig Nobel Prize.]

  "Bed Rest: A Potentially Harmful Treatment Needing More Careful Evaluation,"
  Chris Allen, Paul Glaszious, and Chris Del Mar, The Lancet, vol. 354,
  October 9 1999, p. 1229. [The authors conclude that bed rest by itself
  doesn't necessarily help you get well, especially if you are ill.]

  "Pigeons' Discrimination of Paintings by Monet and Picasso," Shigeru
  Watanabe, Junko Sakamoto, and Masumi Wakita, "Journal of the Experimental
  Analysis of Behavior," vol. 63, 1995, pp. 165-174. [Another Ig Nobel Prize
  winner, "for their success in training pigeons to discriminate between the
  paintings of Picasso and those of Monet."]

So, no matter what your problem, rest assured that, sometime, somewhere,
somewhy, a scientist has probably already reported on it.

[These citations are courtesy of the Annals of Improbable Research,
http://www.improbable.com.]

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: More on Quantum vectors...

2000-06-06 Thread Frank Atanassow

Jan Skibinski writes:
   Well, Frank, there are zillions of papers and books on 
   mathematical background of QM and I certainly would not add
   anything of a consequence here. A theoretical foundation of
   Quantum Mechanics is the Hilbert space. Dirac's formalism
   is a neat notation for that space and the physicists like it,
   but you could use any other notation for that matter.

I am aware of the many books on QM. However, I would not expect a CS person to
read a whole book just to read a paper which has to do with QM.

Sure, it's a rehash of material available elsewhere, but it only needs to
touch the points which are relevant for your paper.

Conversely, if I were going to write a paper to do with lambda-calculus but
which is aimed at an audience of physicists, I would include a little
background in the paper rather than just refer them to Barendregt's book. (Of
course, a solid bibliography is also necessary.)

   If you are asking for a paper on application of
   QM that's again uncountable a task.

No, I'm not.

   But if you are asking for a paper on QM vs. FP - that's
   another story. Jerzy sent one pointer in his last post.
   I hope more papers of this kind will appear in the future.

I'm asking for (er, rather, trying to encourage you to write) a paper on your
QM modules, and your perspective on QM vs. FP as well. (I'm going to look at
Jerzy's paper next.)

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





More on Quantum vectors...

2000-06-05 Thread Frank Atanassow

Jerzy Karczmarczuk writes:
  ...although apparently there are exactly two readers/writers
  of this thread on this list. Oh, well, it is as boring as any 
  other subject.

I'm reading it. I think this field of application could be very
interesting. Jan, could you write up a paper on it, with enough of the
mathematical background for non-physicist CS people to grok it?

And maybe Jerzy could write up something which elaborates this remark:

  I confess that I became interested in Haskell *because* of its possible
  applications to scientific computing, and *in particular* to quantum
  physics. (And some statistical physics; the underlying math is very
  similar, and this is not accidental).
  
  
  Mind you, this is a domain where you see immediately the necessity of
  computing using higher-order functions!
  
  Your states are functions. Your mathematical objects are functions. Your
  physical quantities (observables) are functions acting on states.
  
  Most problems in QM cannot be solved without using perturbation methods.
  The perturbation formulae are usually very tedious to implement, unless 
  one dares to use some lazy coding.
  
  Then you can economize a few days of pencil work, and you can spend this
  time rolling on the ground and laughing at the people who claim that
  Haskell is useless for practical computations, because they don't know
  how to implement some middle-Chinese chess in it.
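A taste of the lazy coding Jerzy alludes to (a sketch in the style of his lazy
power-series work, written by me for illustration): represent a formal series
by the lazy list of its coefficients. Each coefficient of a product needs only
finitely many inputs, so infinite expansions compose directly.

```haskell
-- A formal power series as the lazy list of its coefficients.
type Series = [Rational]

sAdd :: Series -> Series -> Series
sAdd = zipWith (+)

-- Cauchy product: (f + x*fs) * g = f*g0 + x*(f*gs + fs*g).
sMul :: Series -> Series -> Series
sMul (f : fs) g@(g0 : gs) = f * g0 : sAdd (map (f *) gs) (sMul fs g)
sMul _ _ = []

-- The geometric series 1/(1-x) = 1 + x + x^2 + ...
geo :: Series
geo = 1 : geo
```

For instance, take 5 (sMul geo geo) == [1,2,3,4,5], the leading coefficients
of 1/(1-x)^2, computed on demand from two infinite expansions.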

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: more detailed explanation about forall in Haskell

2000-05-19 Thread Frank Atanassow
 claim it is
inconsistent, then you will need to show that set theory is inconsistent. Good
luck.

  The forall's used with domains, say e.g.   N  is the set of  integers are in
  fact bounded forall's and
  one has e.g.
  
  forall x in N . prime(x)  <=>  forall x. (x in N => prime(x))
  
  the left forall above is an example of a bounded forall where x must range
  over a constant set , here N.

You have jumped from the meta-level to the object-level. We were discussing a
term logic with simple parametric quantification. This is irrelevant. You can
try to incorporate increasingly large parts of your metatheory into the object
theory, but it will never become a closed system.

  However more complicated bounded forall's can be considered.
  In general let alpha(x) be a formula about a variable x then
  
  [forall alpha(x) . prime(x)]  <=>  [forall x. alpha(x) => prime(x)]

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: more detailed explanation about forall in Haskell

2000-05-19 Thread Frank Atanassow

There is a good introduction by Cardelli and Wegner:

  Luca Cardelli and Peter Wegner. On understanding types, data abstraction,
  and polymorphism. Computing Surveys, 17(4):471-522, 1985.

available from Cardelli's page at

  http://research.microsoft.com/Users/luca/Papers/OnUnderstanding.A4.ps
  http://research.microsoft.com/Users/luca/Papers/OnUnderstanding.{US,A4}.{ps,pdf}

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: more detailed explanation about forall in Haskell

2000-05-18 Thread Frank Atanassow

Jan Brosius writes:
  I must put this in the good way;
  
  [forall x . alpha(x)] => alpha(x) is True

Yes, by instantiation.

  If alpha(x) is TRUE then the following is true : alpha(x) => [forall x.
  alpha(x)]

No, alpha(x) only asserts that some element named x satisfies alpha. It does
not follow that therefore every other element also satisfies alpha.

  So if alpha(x) is TRUE then [forall x. alpha(x)] <=> alpha(x)

This does not follow. It is truth-equivalent to:

  [forall x. alpha(x)] <=> True

which is equivalent to:

  forall x. alpha(x)

which is only true when every element satisifies alpha; so it is not a
tautology.

I repeat: you do not understand the difference between bound and free
variables. A free variable behaves like a constant. It is not "implicitly"
quantified in any way, because it denotes a specific element. The only
difference between a constant and a free variable is syntactic: a free
variable can be bound in an outer scope, while a constant cannot.

   The sentence "alpha(x)" asserts that there is a _distinguished_ element
   named
  
  NO , saying that there is a distinguished element T that satisfies alpha
  implies
  
  that exists x. alpha(x) is true

Yes, it also implies this fact, because it can be derived by extensionality:

  alpha(x) => exists y. alpha(y)

However, existential quantification hides the identity of the element in
question. The fact that we know _which_ element it is that satisifies alpha
permits us to say more. This is why:

  exists x. alpha(x),
  alpha(x),

and

  forall x. alpha(x)

are not truth-equivalent. Rather we have:

  forall y. alpha(y) => alpha(x)   and   alpha(x) => exists z. alpha(z)

   "x" in the domain of your model, and that that element, at least,
  satisfies
  
  x is a variable ; no domain ; no model

A model must assign an element in the domain to each free variable. You need a
model to determine the truth-value of a first-order proposition which is not
tautological. We are trying to establish the truth-value of a proposition with
a free variable; therefore we need a model. A model needs a domain of elements
to draw from. Therefore we need a domain. OK?
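To make the point with a toy model (my illustration): take domain D = {0, 1},
interpret alpha as the subset {0}, and assign the free variable x the element
0. Encoded directly in Haskell:

```haskell
-- A two-element model: domain D = {0,1}; alpha denotes the subset {0};
-- the free variable x is assigned the element 0.
dom :: [Int]
dom = [0, 1]

alpha :: Int -> Bool
alpha n = n == 0

xVal :: Int
xVal = 0

-- alpha(x) is true, exists y. alpha(y) is true, forall y. alpha(y) is
-- false -- so the three sentences are genuinely inequivalent.
freeForm, existsForm, forallForm :: Bool
freeForm   = alpha xVal
existsForm = any alpha dom
forallForm = all alpha dom
```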

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University Padualaan 14,
PO Box 80.089, 3508 TB Utrecht, Netherlands Tel +31 (030) 253-1012, Fax +31
(030) 251-3791





Re: Unicode

2000-05-17 Thread Frank Atanassow

Manuel M. T. Chakravarty writes:
  The problem with restricting youself to the Jouyou-Kanji is
  that you have a hard time with names (of persons and
  places).  Many exotic and otherwise unused Kanji are used in
  names (for historical reasons) and as the Kanji
  representation of a name is the official identifier, it is
  rather bad form to write a person's name in Kana (the
  phonetic alphabets).

You're absolutely right. This fact slipped my mind.

Still, probably 85% (just a guess) of Japanese names can be written with
Jyouyou kanji, and the CJK set in Unicode is a strict superset of the Jyouyou,
so there are actually more kanji available, and the problem is not quite so
severe. However, for Chinese names I can imagine it being quite restrictive.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Fw: more detailed explanation about forall in Haskell

2000-05-17 Thread Frank Atanassow

Jan Brosius writes:
   Why do some computer scientists have such problems with the good logical
   forall
   and exist.  Remember that good old logic came first.
   On it was build SET theory.
   On it was built topological space
  
   To prove some theorem in lambda calculus one used a topological model.
  
   You see : good old logic came FIRST  afterwards came theorems of typed
   lambda calculus .
   This is not the sophistic question : what came first : the egg or the
   chicken.
  
   NO good old logic came first .

Your argument is absurd and irrelevant.

 SORRY,  this is quite TRUE , in fact  [forall x. alpha(x)]  <=>  alpha(x)

 the above true equivalence seems to be easily considered as wrong . Why?
 Because alpha(x)  is TRUE can be read as  alpha(x) is TRUE for ANY x.

I think this is one of the roots of your misconceptions. These two
propositions are certainly not equivalent. Maybe it is because you have been
told that, in Haskell syntax, we typically elide the "forall" quantifier.

The sentence "alpha(x)" asserts that there is a _distinguished_ element named
"x" in the domain of your model, and that that element, at least, satisfies
"alpha". The sentence "forall x. alpha(x)", OTOH, asserts that _any_ element
in your domain satisifies "alpha". So if your domain is a set D, then a model
of "alpha(x)" needs a subset C of D to interpret "alpha", along with a member
X of C to interpret "x", and the sentence is true iff X is in C. OTOH, a model
of "forall x. alpha(x)" needs only the subset C, and is true iff C=D, since it
asserts that for any element Y of D, Y is in C.

In Haskell the situation is complicated by the fact that there are no
"set-theoretic" models (are you even aware that Haskell's type system is
unsound?), and the fact that the domain is multi-sorted. But those facts do
not bear on the distinction between the two terms on either side of the
equivalence above.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Fw: more detailed explanation about forall in Haskell

2000-05-17 Thread Frank Atanassow

Frank Atanassow writes:
  Jan Brosius writes:
 Why do some computer scientists have such problems with the good logical
 forall
 and exist.  Remember that good old logic came first.
 On it was build SET theory.
 On it was built topological space

 To prove some theorem in lambda calculus one used a topological model.

 You see : good old logic came FIRST  afterwards came theorems of typed
 lambda calculus .
 This is not the sophistic question : what came first : the egg or the
 chicken.

 NO good old logic came first .
  
  Your argument is absurd and irrelevant.

I take it back. There is no argument here, only the childish insinuation that
"my daddy can beat up your daddy".

So there. blfffft!

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: Unicode

2000-05-16 Thread Frank Atanassow

George Russell writes:
  Marcin 'Qrczak' Kowalczyk wrote:
   As for the language standard: I hope that Char will be allowed or
   required to have >=30 bits instead of current 16; but never more than
   Int, to be able to use ord and chr safely.
  Er does it have to?  The Java Virtual Machine implements Unicode with
  16 bits.  (OK, so I suppose that means it can't cope with Korean or Chinese.)

Just to set the record straight:

Many CJK (Chinese-Japanese-Korean) characters are encodable in 16 bits. I am
not so familiar with the Chinese or Korean situations, but in Japan there is a
nationally standardized subset of about 2000 characters called the Jyouyou
("often-used") kanji, which newspapers and most printed books are mostly
supposed to respect. These are all strictly contained in the 16-bit space. One
only needs the additional 16-bits for foreign characters (say, Chinese), older
literary works and such-like. Even then, since Japanese has two phonetic
alphabets as well, you can usually substitute phonetic characters in the
place of non-Jyouyou kanji---in fact, since these kanji are considered
difficult, one often _does_ supplement the ideographic representation with a
phonetic one. Of course, using only phonetic characters in such cases would
look unprofessional in some contexts, and it forces the reader to guess at
which word was meant...

For Korean and especially Chinese, the situation is not so pat. Korean's
phonetic alphabet is of course wholly contained within the 16-bit space, but
the Chinese, as a rule, don't use phonetic characters. Koreans rely on their
phonetic alphabet more than the Japanese do, but they still tend to use (I
believe) more esoteric Chinese ideographic characters than the Japanese
do. And the Chinese have a much larger set of ideographic characters in common
use than either of the other two. I'm not sure what percentage is contained in
the 16-bit space; it's probably enough that you can communicate most
non-specialized subjects fairly comfortably, but it is safe to say that the
Chinese would be the first to demand more encoding space.

In summary, 16 bits is enough to encode most modern texts if you don't mind
fudging a bit, but for high-quality productions, historical and/or specialized
texts, CJK users will want 32 bits.

Of course, you can always come up with specialized schemes involving stateful
encodings and/or "block-swapping" (using the Unicode private-use areas, for
example), but then, that subverts the purpose of Unicode.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: more detailed explanation about forall in Haskell

2000-05-12 Thread Frank Atanassow

Claus Reinke writes:
[nice exposition of C-H correspondence & state threads]

 The main caveat is the insistence on constructive proofs (in the classical
  logic most of us tend to learn, there is this favourite proof technique: if
  I assume that xs don't exist, I am led to a contradiction, so xs have to
  exist, even if I haven't seen any of them yet - this is not acceptable in
  constructive logics).
  
  [haven't read the papers on a correspondence for classical logic yet, but
  I assume they exist, for otherwise I would contradict Frank Atanassow ;-]

Here's a nice example, which you alluded to.  Reading | as `or' and ~ as
`not', a|~a is a theorem of classical logic but not intuitionistic logic. By
definition, ~a = a -> _|_ (falsum). A proof of this proposition is given by the
term

  /\a -> \\(m::a|n::a->Void) -> [n] (\x -> [m] x)    -- /\ is big-lambda

"[m] t" and "\\m::a - t" are new term forms. They throw and catch
continuations, where a continuation accepting values of type a is something of
type a -> Void. Void is the empty type, so this means that a continuation is
something like a function which never returns.

"[m] t" takes a term t :: a, and yields a term of type Void with a fresh free
variable m :: a->Void. You can think of [m] t as meaning, "throw value t at
continuation m". When this gets reduced, the current context is discarded and
execution proceeds from m, with t as input.

"\\" is usually written with Greek letter `mu'. In "\\m::a - t", the term t
must be of type Void and possess a free variable m :: a-Void; the result is a
term of type a, in which m is now bound. You can think of "\\m::a - t" as
meaning, "catch a value v thrown to continuation m and return v as the
result". Note that since t has type Void, it must always throw something and
can never return normally. (In case of conflicts, which value gets caught
depends of course on the reduction order.)

"\\(m::a|n::b) - t" is a pattern-matching variation on the mu-binder. t is
again of type Void, but the result is of type a|b. The meaning is that it
catches values thrown to either continuation, injects it into the sum, and
then returns it. (There is also a variant "[m|n] t" of the "[m] t" syntax, but
we don't need it.)

So what does our term

  /\a -> \\(m::a|n::a->Void) -> [n] (\x -> [m] x)

mean? Well, when it gets reduced, it remembers its calling context and
associates it with m and n. Then it initially returns (by throwing it at n) to
that context the closure (\x -> [m] x)::a->Void which gets injected into the
right summand. Execution proceeds, and if at any point this closure ever gets
applied to some value v, then the original context is magically restored, but
this time with v injected into the left summand. So this is an example of the
time-travelling effect you get with multiply-invoked continuations (because we
can consider that there is only one continuation of type a|a -> Void.)
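This time-travelling behaviour can be sketched in plain Haskell with the continuation monad (assuming Control.Monad.Cont as shipped with mtl; the names excludedMiddle and demo are made up for illustration): the inhabitant of a | (a -> Void) first returns the Right injection, and applying that "continuation" later restores the original context with a Left value.

```haskell
import Control.Monad.Cont

-- Classical a \/ ~a, with ~a read as "a -> (no return)": initially
-- answer Right; if that function is ever applied, jump back to the
-- original context and answer Left instead.
excludedMiddle :: Cont r (Either a (a -> Cont r b))
excludedMiddle = callCC $ \k -> return (Right (\x -> k (Left x)))

demo :: String
demo = flip runCont id $ do
  e <- excludedMiddle
  case e of
    Left n  -> return ("caught: " ++ show (n :: Int))
    Right f -> f 7   -- time-travels back; e becomes Left 7

main :: IO ()
main = putStrLn demo
```

Running demo, the Right branch fires first, f 7 rewinds to the case analysis, and the Left branch produces the final answer.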

Incidentally, the reason you need a special form for continuation
"application" is that I glossed over a technical detail concerning whether to
take ~a or a -> Void as primitive. At least in the lambda-mu calculus, you're
not allowed, actually, to write "f x" if f::A->Void for any A; you have to use
"[f] x". I forget the details.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: more detailed explanation about forall in Haskell

2000-05-11 Thread Frank Atanassow

Thorsten Altenkirch writes:
  Jan Brosius writes:
Marcin Kowalczyk wrote at Wed, May 10, 2000 7:54 PM :
  2. Next let me point out once and for all that
  logical quantifiers are used only in logical formula's .

 Types can be treated as logical formulas, according to the Curry-Howard
 isomorphism.

Sorry, never heard of in logic. But perhaps you can explain.
  
  [I hope it is ok if I reply, although I am sure Marcin can defend
  himself just as well]
  
  Maybe because you have only learnt about classical logic, which is a
  shame especially for a computer scientist. Have a look at a standard
  text book, like Troelstra & van Dalen's "Intuitionism I,II".

BTW there is a C-H correspondence for classical logic too, although it
requires a constructive presentation of the rules and considerable care with
reduction laws. The trick is to model the type of a continuation as a negated
proposition, with reductio ad absurdum as a call/cc-like operation.

See for example:

  C.-H. L. Ong, A semantic view of classical proofs: type-theoretic,
  categorical, denotational characterizations. In Proceedings of 11th IEEE
  Annual Symposium on Logic in Computer Science, IEEE Computer Society Press,
  pp. 230-241, 1996.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: When is it safe to cheat?

2000-05-09 Thread Frank Atanassow

Jan Skibinski writes:
   Good point. Short of reading some truly random device
   (perhaps ambient temperature fluctuation) this can be always
   theoretically defeated. I can only make life more difficult
   to the attacker by trying to outsmart him algoritmically
   (Or to confuse him. My clock is always several hours too late
or too early. Just kidding)

   Any good idea? First prize: a bottle of something good. :-)

There is a thing known as an Entropy Gathering Daemon (EGD). 

From http://www.lothar.com/tech/crypto/ :

One of the nice features of the Linux kernel is the /dev/random
device. This is a little character device that gives you random
numbers when you read it. In a variety of places scattered
throughout the kernel, certain interrupts (network packets arriving,
keyboard hits, mouse movement) cause a timestamp and some event
information to be hashed into an "entropy pool". The pool, perhaps
4k in size, always contains very random data, but as bits are
"stirred" in, a counter is incremented to reflect the fact that the
poll is now even more random than before. When you read from
/dev/random, you get a hashed portion of the pool, and the counter
is decremented. This gives you high quality cryptographically strong
random data.

...

EGD is an Entropy Gathering Daemon meant to be used on systems that
can run GPG* but which don't have this convenient source of random
bits. It is a regular user-space program that sits around, running
programs like 'w' and 'last' and 'vmstat', collecting the randomness
(or at least the unpredictability) inherent in the output of these
system statistics programs when used on a reasonably busy system. It
slowly stirs the output of these gathering programs into a pool of
entropy, much like the linux kernel device, and allows other programs
to read out random bits from this pool.

* GPG = GNU Privacy Guard
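For what it's worth, reading from the kernel pool is a one-liner in Haskell (a sketch, Linux-specific; /dev/urandom is used here as the non-blocking sibling of /dev/random, and readEntropy is a made-up name):

```haskell
import System.IO (withBinaryFile, IOMode(ReadMode))
import qualified Data.ByteString as B

-- Pull n bytes out of the kernel's entropy pool.
readEntropy :: Int -> IO B.ByteString
readEntropy n = withBinaryFile "/dev/urandom" ReadMode (\h -> B.hGet h n)

main :: IO ()
main = readEntropy 16 >>= print . B.length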

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





When is it safe to cheat?

2000-04-28 Thread Frank Atanassow

Jan Skibinski writes:
   When can I safely cheat haskell compiler/interpreter
   by pretending that I perform pure computations,
   when in fact they are not? Here is a real example,
   from my Md5Digest module which works fine in Hugs:

I don't understand what is impure about the MD5 example, but the time example
is clearly state-dependent. I think the bottom line is that unsafePerformIO
has no semantics beside the fact that it _forgets_ the effectful semantics of
the inner expression, and since we don't have an operational semantics for
Haskell, you can in principle expect any "bad" use of unsafePerformIO to fail.

For example, even if you try to suspend the evaluation by guarding the
expression with a (), as Nigel explained, a smart compiler could recognize
that a function of type () - a is denotationally equivalent to a constant
of type a.

So what you are really doing in these cases is trying to outsmart the
compiler('s designers), which is IMO a pointless exercise. (Think: "the
compiler as a black box".)

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Derived class problem

2000-04-27 Thread Frank Atanassow

Mike Jones writes:
  I am having a problem with a derived class. I define:
  
  class (Monad m) => InstrumentMonad m where
   yuck :: a -> m a
  
  Then I define:
  
  instance InstrumentMonad Vi where  (Line 30)
   return a = Vi (\s -> (s, a))
   Vi sf0 >>= f =
   Vi $ \s0 -> 
   let
   (s1, a1) = sf0 s0
   Vi sf1 = f a1
   (s2, a2) = sf1 s1
   in (s2, a2)
  
  And when I compile, I get the error:
  
  Vi.hs:30:
  No instance for `Monad Vi'
  arising from an instance declaration at Vi.hs:30
  
  Vi.hs:31: Class `InstrumentMonad' does not have a method `return'
  
  Vi.hs:32: Class `InstrumentMonad' does not have a method `>>='

You need to define the methods for class Monad (return, >>=) in an instance
for class Monad, and the methods for class InstrumentMonad (yuck) in an
instance for class InstrumentMonad.
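Concretely, the split looks like this (a sketch: Vi is assumed to be a state transformer over some state type, fixed to Int here for illustration; modern GHC also wants Functor and Applicative instances, which were not required in 2000):

```haskell
import Control.Monad (ap, liftM)

newtype Vi a = Vi (Int -> (Int, a))  -- assumed shape of Vi

instance Functor Vi where
  fmap = liftM

instance Applicative Vi where
  pure a = Vi (\s -> (s, a))
  (<*>)  = ap

-- return and >>= belong in the Monad instance...
instance Monad Vi where
  Vi sf0 >>= f = Vi $ \s0 ->
    let (s1, a1) = sf0 s0
        Vi sf1   = f a1
    in sf1 s1

-- ...and only yuck belongs in the InstrumentMonad instance.
class Monad m => InstrumentMonad m where
  yuck :: a -> m a

instance InstrumentMonad Vi where
  yuck = return
```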

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Use of irrefutable

2000-04-20 Thread Frank Atanassow

Mike Jones writes:
  I am looking at the Hawk Signal module, where the following definition
  occurs:
  
  lift1 f (List xs) = List $ lazyMap f xs
   where
   lazyMap f ~(x:xs) = f x :  lazyMap f xs
  
  Now setting aside how the function is used in Hawk, I ran a little
  experiment to see what happens when the irrefutable definition is removed by
  calling it with:
  
  a = List [1, 2]
  b = lift1 (+ 1) a
  
  Now without it I get an error for a "non-exhaustive pattern". With it, I get
  an "irrefutable pattern failed".

Because you used a finite list as input. lazyMap only matches the cons case,
but a finite list always has a nil case. The error messages may be different,
but the problem is the same. Try "take 2 $ lazyMap (+ 1) (cycle [1, 2])".
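A self-contained sketch of the difference, with lazyMap lifted to plain lists for illustration:

```haskell
lazyMap :: (a -> b) -> [a] -> [b]
lazyMap f ~(x:xs) = f x : lazyMap f xs

main :: IO ()
main = do
  -- On an infinite input, taking a finite prefix is fine:
  print (take 2 (lazyMap (+ 1) (cycle [1, 2 :: Int])))
  -- The ~ means the result spine is produced without inspecting the
  -- argument, so even an undefined input yields two (unforced) cells:
  print (length (take 2 (lazyMap (+ 1) (undefined :: [Int]))))
```

Without the ~, the second call would crash immediately: the pattern match would force the undefined argument before producing any output.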

  Can some one explain to me the advantages and disadvantages of using
  irrefutable matching, including how irrefutable matching is used in general?
  Why and when it is used, etc.

I'm pretty sketchy on this too.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: string to Integer

2000-04-07 Thread Frank Atanassow

Yuichi Tsuchimoto writes:
   Or look at o's and flippo's types:
   
 (.)  :: ((a -> b) -> (c -> a)) -> (c -> b)
 flip (.) :: ((a -> b) -> (b -> c)) -> (a -> c)
   
   Surely the second one is much cooler!
  
  Yes, indeed!
  
  Then, the question is why we write
result = function operand1 operand2
  instead of
operand1 operand2 function = result

As a question of notation, I think the difference is that you use the
diagrammatic notation (flip (.)) when you want to emphasize the process of
computing something (buzzword, "imperative"). If you read left-to-right then
you can see each stage of a transformation, in the order in which it "logically"
occurs. On the other hand, the (.)-style notation emphasizes the declarative
viewpoint since, again reading left-to-right, you start with what you want
and refine down to what you're starting with.

In category theory one often writes commutative arrow diagrams to express
systems of equations. If you use the diagrammatic notation, it can be simpler
to follow paths in the diagram because, by convention, one prefers right- and
down-pointing arrows over left- or up-pointing ones.

If Haskell 98 had user-definable infix type expressions (and <- wasn't part of
the syntax already), you could define the transpose of (->)

  type b <- a = a -> b

and then write the signature for (.) as follows:

  (c <- a) <- (c <- b) <- (b <- a)

Using <- in type signatures has the advantage that the first thing you see in
a signature is what is produced, rather than what is necessary to produce,
which is sometimes what you want when you have a set of algebraic functions
like John Hughes' pretty-printing library:

 text  :: Doc <- String
 (<+>) :: Doc <- Doc <- Doc

However it does not work so nicely in Haskell since by convention we curry
everything, so the order of arguments is also reversed. If we used uncurried
functions more often the signature for cons

  cons :: List a <- List a <- a

would be more intuitive:

  cons :: List a <- (a, List a)

(Incidentally, I think Roland Backhouse made this argument, i.e., that we
should prefer (<-) to (->), although he was working with a relational calculus
rather than a functional one.)
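Today's GHC can approximate this with the TypeOperators extension (a sketch; <- itself is reserved syntax, so the operator name <~ used here is made up, as are render and combine):

```haskell
{-# LANGUAGE TypeOperators #-}

-- The transposed arrow: the result type appears on the left.
type b <~ a = a -> b

-- Algebra-style signatures now lead with what is produced.
render :: String <~ Int
render = show

combine :: String <~ (String, String)
combine (x, y) = x ++ " " ++ y

main :: IO ()
main = putStrLn (combine (render 1, render 2))
```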

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: string to Integer

2000-04-07 Thread Frank Atanassow

Frank Atanassow writes:
  Using <- in type signatures has the advantage that the first thing you see in
  a signature is what is produced, rather than what is necessary to produce,
  which is sometimes what you want when you have a set of algebraic functions
  like John Hughes' pretty-printing library:
  
   text  :: Doc <- String
   (<+>) :: Doc <- Doc <- Doc

On re-reading this I see my point was not so clear. What I wanted to indicate
is that the functions of an algebra have a common codomain, like Doc, so
putting it first in a signature emphasizes the commonality between
them. Combinator languages and monads (the extra operations are generally
typed as X -> M Y, for a monad M) are pretty common in Haskell, so by that
token (<-) might be preferable to (->).

OTOH, if we used coalgebras more heavily in Haskell we could make the opposite
case, that (->) is preferable, since coalgebras have a common domain.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: == and hyperstrictness

2000-03-22 Thread Frank Atanassow

  Strictness only improves efficiency and narrows cases when a function is
  defined, but it can never improve correctness. There is no code that
  requires strictness to work at all.
  
  Unless we use extensions like GHC's Exception or unsafePerformIO.  Or use
  hGetContents and want to explicitly close a file to work around limits of
  concurrently open files, or to write to that file, or to use
  Posix.forkProcess.

I think there are cases where strictness is a condition of correctness. It
depends on whether or not you are using bottom to model an element of your
intended semantic domain. Most programs don't: they only use bottom for
operational convenience, for laziness, say, or to model non-termination or
errors when the program is incorrect. But some programs make essential use of
bottom in a denotational way, and then functions defined on the type in
question are required to be strict.

I admit I can't think of any just now, though... :) Maybe someone else can
think of an example?

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: HaskellDoc?

2000-03-22 Thread Frank Atanassow

Jan Skibinski writes:
   How come ISE Eiffel tools can handle all of this so
   nicely from a clean ascii, readable source code? As far
   as I can remember the only ugly looking comment line sits
   somewhere at the top and says something like this:
   "Document: $blaha $Date...". But the rest is pretty
   -- as it supposed to be.
  
   There are plenty of views that are given to Eiffel
   users as his/her choice. And all of them are produced
   from the same readable source code on the fly. So-called
   short form which ignores inheritance? Here you go!
   Flat form, fully blown API with inheritance. Flat-short
   form. Extracts of description of classes. Cross references.
   Lists of heirs, lists of parents. 

Could you give us a link to a description of this mechanism? I looked through
www.eiffel.com but could only find more general descriptions of the
language/compiler.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791





Re: HaskellDoc?

2000-03-20 Thread Frank Atanassow

Jan Brosius writes:
  
  
  
   Frank Atanassow [EMAIL PROTECTED] wrote:
   
   
   Anyway, I don't think the choice of markup is all that crucial, but I
   think markup for documenting Haskell should also be as functional and
   elegant as possible.  Is Lout a thing to consider?
   
  
  Yes, I think Lout is the best candidate

No, I didn't write that; Ketil did. As far as Lout goes, if we are going to
pick something exotic, I would prefer a Haskell solution. :)

But just so this message doesn't become a complete waste of bandwidth, let me
include a link that I found today, to Kurt Nørmark's page.

  http://www.cs.auc.dk/~normark/

Take a look at his links to LAML and "elucidative programming", especially the
"Small" and "Large Example" links on the latter page.

Also, there is a mildly interesting (if you can penetrate the flames)
discussion on the merits of XML at Advogato, "Markup Abuse: some comments on
the XML panacea":

  http://advogato.org/article/47.html

Also, let me briefly respond to Ketil:

Ketil Malde writes:
  Frank Atanassow [EMAIL PROTECTED] writes:
 
  I think most of your points against SGML holds for XML as well, am I
  wrong? 

No, I agree they do. I only meant that XML is a lesser evil (if you'll excuse
the loaded terminology).

  [The average user]
 * is likely to be intimidated by the massive infrastructure (programs:
   Jade, DocBook stylesheets, Haskell-specific stylesheets, probably
   also PDFlatex; concepts: SGML, DocBook, DSSSL) that is required to
   handle his literate code; 
  
  To some extent, yes.  But the end user really only needs to understand 
  how to insert the proper tags, and run hs2ps or the like.

He also has to know what the proper tags are, and what their intended
semantics are, and DocBook is quite large. But perhaps that is unavoidable,
and DocBook is becoming more mainstream anyway.

   as if installing GHC wasn't hard enough? :)
  
  What, you mean "apt-get install ghc4" is too hard?

I guess you have never tried installing GHC on a non-Linux platform---although
admittedly the situation is much better than it used to be.

  To conclude, I think it is important to determine what we want with a
  literate documentation system, aiming for too many targets is bound to 
  end in disaster.

Yes, that's actually what I started this discussion for, to get an idea of the
requirements.

  Thinking a bit further from this, I think one of the reasons why Lisp
  (and I suspect Smalltalk) have such nice development environments, is
  that the environment interacts a lot with the compiler or
  interpreter.  I.e. the editor can access data structures more or less
  internal to the compiler.  Or put another way, the program text (in
  particular for Lisp) is treated as data by the compilation/development
  system.  Could this be achieved with Haskell? 

It is easier to do this in LISP and Smalltalk because they are dynamically
typed. You could try for some sort of reflection in Haskell, for example by
starting with the public Haskell parser, but I think it would complicate
things enough that it isn't worth it. I don't think a generic documenting
solution for Haskell will be accepted if we innovate too much.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791




Re: HaskellDoc?

2000-03-20 Thread Frank Atanassow

Ketil Malde writes:
  Frank Atanassow [EMAIL PROTECTED] writes:
   [a nice development environment] is easier to do this in LISP and
   Smalltalk because they are dynamically typed. You could try for some
   sort of reflection in Haskell, for example by starting with the
   public Haskell parser, but I think it would complicate things enough
   that it isn't worth it. I don't think a generic documenting solution
   for Haskell will be accepted if we innovate too much.
  
  It doesn't sound *too* difficult or esoteric.  Hugs-mode in Emacs does
  a bit of it already, displaying types of functions and such --
  although it seems a bit limited (to the Prelude?).  How hard would it
  be to either get the underlying Hugs, or e.g. a Happy-based parser, to
  snarf type and other information from modules in scope, and also look
  for embedded documentation?

That's certainly possible, provided you keep the embedded documentation
Haskell 98-compliant, i.e., in comments or non-code blocks, not in LISP-like
documentation strings. But:

Ketil Malde writes:
  Thinking a bit further from this, I think one of the reasons why Lisp
  (and I suspect Smalltalk) have such nice development environments, is
  that the environment interacts a lot with the compiler or
  interpreter.  I.e. the editor can access data structures more or less
  internal to the compiler.  Or put another way, the program text (in
  particular for Lisp) is treated as data by the compilation/development
  system.  Could this be achieved with Haskell? 

This requires much more infrastructure. You'd need something on the order of
SML/NJ's "visible compiler", I guess.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791




Re: HaskellDoc?

2000-03-20 Thread Frank Atanassow

Jonathan King writes:
  1) Haskell code uses white-space as a delimiter by default, presumably
 because it's clean and intuitive.
  
 However: it seems like once a month (or even more often), it gets
 pointed out that the "offsides" rule that Haskell
 compilers/interpreters have to use is, uh, pretty hideous.  And
 a slew of Haskell bugs often boil down to finding a misindented
 "where" clause 200-and-something lines up the file.
  
 Also however: Every idea I've seen so far about (re)doing literate
 programming in Haskell seem to recognize the fact that ending 
 delimiters are a reasonable price to pay for a lot of reasons.  So
 Haskell could end up being a language where you could have a program
 whose documentation is exquisitely and verifiably well-formed despite
 the fact the code itself has been ruined by incorrect indentation.
  
 Am I the only person who finds that really, really weird?

Most people slam the offside rule because it depends on trapping parser
errors, not because they hate the notion of layout. There is a simpler notion
of the layout which doesn't depend on error-trapping and is preferable, at
least in my mind.

As for stray "where" clauses and the like, I think that happens to me maybe
once or twice a year. Once my definitions get longer than one screenful or so,
I know it's time to factor some things out. And the beauty of referential
transparency is that you can usually do it, too. So I have no problems with
layout...

  3) People in the Haskell programming community seem to appreciate
 the benefits of abstraction.  Wow, what a surprise.  Nonetheless,
 I've seen documentation for Haskell projects presented in formats
 that range from plain text to LaTex to HTML to postscript generated
 by a word processor that shall remain nameless...all over the place,
 with the one commonality apparently being that, as far as formats go,
 the statement "You can't get there from here" is true for many if not
 values of "You", "there", and "here".  As far as I can tell, the best
 stab so far seems to be LaTeX, just because you probably could get
 to some other format from there on almost any machine.
  
 However: this is not just a problem for Haskell, *and it's a very
 hard problem if you want it to be*.  There are maybe a thousand
 computer languages out there and fifty thousand systems
 for semi-literate programming.  As far as I can tell, the one that
 is used by the widest variety of people who don't share the same
 lab or office is the "pod" format used by Perl.  Now, nobody is going
 to hold that one out as a model of the ideal anything, but, weirdly
 enough, it works well enough, because it doesn't try to over-reach,
 but *does* provide translators and converters *to* the formats that
 people really need.  Number one being html, number two being text,
 and number three being man.  Everything else seems to be window
 dressing these days.
  
 There's probably a lesson there.  You can stipulate any old format
 you like, but if it won't easily produce HTML (like lout), or
 produces a psychotic approximation to HTML (like W**d), you're hosed.
 Any browser on the planet can dump HTML to text or postscript, and
 no, it won't be artsy, but, gosh, it might just be good enough.

[Was there a #2?]

What's your point? I think we all want to be able to produce HTML... or did I
miss something? I also agree that we should not shoot for too much; but I
think we should agree on what we shoot for, first.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791




Re: HaskellDoc?

2000-03-19 Thread Frank Atanassow

Volker Wysk writes:
  On Fri, 17 Mar 2000, Philip Wadler wrote:
  
   Volker suggests using SGML/DSSSL for documentation.  If one were to
   take this route, I think XML/XSLT would be a more sensible combination.
  
  Why do you think so? I see the following advantages of SGML/DSSSL over
  XML/XSL:
  
  - open source tools available
  - SGML is much better for ordinary text editors. XML markup is quite
heavy, because no tag minimisation is supported.
  - DTDs such as TEI-Lite and DocBook are for SGML. However, a XML-DTD for
DocBook is being developed.

- Open source tools for XML/XSL are also available, and more of them.

- SGML is more complex than XML, so it is actually easier to handle XML with
  extant text editors. For example, many SGML editing tasks require that the
  editor is aware of the DTD or SGML declaration, whereas XML provides enough
  syntactic hints that you can do without them more often. (For example, tags
  for elements with empty content are syntactically distinct in XML;
  whitespace treatment is explicit in an XML instance; ...) Your point about
  tag minimization is worth noting, though.

- As you say, there is an XML DocBook DTD and stylesheet package in
  development.

On the other hand, SGML has been redefined to be a more-or-less strict
superset of XML now. So you can, for example, use DSSSL tools to format XML as
well.

  SGML can be converted to XML, using a tool like sgmlnorm, I think.
 
Not in general, because the containment relation goes the other direction.

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791




Re: HaskellDoc?

2000-03-18 Thread Frank Atanassow

Volker Wysk writes:
  With SGML, you could achieve all the goals in a systematic manner. You
  would write Haskell-scraps spread over an SGML- instead of a
  Latex-Document. But then, the resulting document *still* is an SGML
  document. You can do all processing a literate programming tool would do
  with a "web" file, directly in SGML/DSSSL. After all, a Haskell programm
  is just another type of text, and LP ist just another sort of text
  processing. SGML is designed to be suitable for (almost?) any conceivable
  way of text processing. And SGML is open, stable, widely used,
  non-proprietary. (And damn complicated for the implementors).

I have five years of experience with SGML in industry, and I know DocBook
fairly well. I also know DSSSL really, really, really well, so please allow me
to vent some steam and explain why I think using SGML as a literate backend is
not such a great idea.

First, I agree with Phil, that if you are going to use a markup language, you
should be using an XML/XSL solution instead. The reason is simply that SGML is
rapidly being phased out, and XSL is destined to, or maybe already largely
has, replace(d) DSSSL. Even James Clark, who spent five years of his life
developing Jade, the most popular free DSSSL engine, and is mostly responsible
for the design of the DSSSL formatting spec, has given up on DSSSL. (And if
you don't believe me, I can point you to the historic :) message on DSSSList
where he said it.)
 
Second, SGML is hard to read and write. Part of the reason for putting code
and documentation in the same place is accessibility. When you change the
code, it is supposed to be easy to update the documentation. But with SGML it
is not so easy, because the tags are often long and noisy. SGML lacks some
things that we take for granted, for instance polymorphism and local
definitions, which has a tendency to inflate DTDs and lead to
repetitiveness in instances.

Also, SGML is essentially just a way of representing simple abstract syntax
trees, with some special allowances for text. There is a reason we use
concrete syntax for inputting programming languages. It is easier to read and
write, and we only need a single parsing or pretty-printing step to get us
from concrete to abstract syntax, or vice versa. So, in my mind, SGML is more
properly thought of as a target of some parsing transformation, rather than as
a source language. It is true that there are some tools like the free PSGML
for Emacs, or the commercial Adept products, which simplify the process of
inputting SGML directly, but we cannot expect the average Haskell user to have
such recourse.
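To make the "abstract syntax trees with a special allowance for text" view concrete, here is a hypothetical few-line model in Haskell. The type and the XML-ish renderer are purely illustrative, not drawn from any real library:

```haskell
-- A hypothetical, minimal model of the claim that SGML/XML documents
-- are just abstract syntax trees with a special allowance for text.
data Node = Element String [(String, String)] [Node]  -- tag, attributes, children
          | Text String

-- Pretty-print a Node back into XML-ish concrete syntax.
render :: Node -> String
render (Text s) = s
render (Element tag attrs kids) =
  "<" ++ tag ++ concatMap attr attrs ++ ">"
    ++ concatMap render kids
    ++ "</" ++ tag ++ ">"
  where attr (k, v) = " " ++ k ++ "=\"" ++ v ++ "\""

-- Example: render (Element "para" [("id","p1")] [Text "hi"])
-- yields <para id="p1">hi</para>
```

Everything SGML adds beyond this picture (minimization, entities, the SGML declaration) is concrete-syntax machinery, which is exactly the point above.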

The same holds for DSSSL. It is true that DSSSL is based on a sort-of
functional language, but 1) it isn't Haskell, 2) there are a lot of nitpicky
details which can make it difficult to use in practice. If you fix the DTD and
stylesheets, say DocBook, that's a different matter, but if you go that far,
what is the point of exposing the fact that you are using DSSSL to the
literate Haskell programmer at all?

I think that, either: a formatting language for literate Haskell should have a
concrete syntax which makes it easy to edit using standard tools, something
lightweight and intuitive like the Wiki web syntax; or, we need to write some
editing tools in Haskell so that they are relatively portable and readily
available and accessible to all Haskell programmers.
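As an illustration of how little machinery the code-extraction half of such a tool needs, a bird-track "unlit" pass is a few lines of Haskell. This is a sketch only: real unlit implementations also handle \begin{code}...\end{code} blocks and enforce blank lines around code, which this does not:

```haskell
-- Sketch of the code-extraction half of a literate tool: keep only
-- bird-track lines, stripping the leading "> " marker. Hypothetical;
-- does not handle \begin{code} sections or blank-line checking.
unlit :: String -> String
unlit = unlines . map (drop 2) . filter isCode . lines
  where isCode ('>':_) = True
        isCode _       = False

-- Example: unlit "Prose.\n\n> x = 1\n" yields "x = 1\n"
```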

The third reason why I don't think SGML and DocBook are so suitable for
literate programming is that Haskell code is already in a machine-readable
format by definition. So we can generate identifier indices, interface
synopses, etc. directly. There is no need to use SGML for this. (I mention
this because half of your argument depended on using SGML tools for generating
such things.) We only need something like SGML to deal with the literate
portion, i.e., the documentation text.

The only advantage (for us) of SGML/DocBook is that by using DocBook (or
rather, by using DocBook + Jade + Norman Walsh's stylesheets) you get access
to several popular output formats like PS, PDF, RTF, HTML, etc. I agree that
that is a big advantage, but then the average user:

  * will wonder why we are not leveraging the power of Haskell at all
("If Haskell is such a wonderful language, why do I need all this
other crud?")
  * must learn DocBook + whatever extensions we add to the DTD
  * is likely to be intimidated by the massive infrastructure (programs:
Jade, DocBook stylesheets, Haskell-specific stylesheets, probably
also PDFlatex; concepts: SGML, DocBook, DSSSL) that is required to
handle his literate code; as if installing GHC wasn't hard enough? :)

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791




Re: ghc-4.06-1.src.rpm (was: ghc to be dropped from potato (debian)

2000-03-15 Thread Frank Atanassow

Peter Hancock writes:
  After a _lot_ of ferreting round the net, I found db2dvi in
  stylesheets-0.10-2.i386.rpm.  (Actually, it's not in
  docbook-3.1-5.i386.rpm, or psgml-1.2.1-1.i386.rpm, or
  sgml-tools-1.0.9-5.i386.rpm, or jade-1.2.1-9.i386.rpm, or ...)  The
  adjective `exotic' seems apt.  (By the way, the docbook web page
  says that the project has been suspended.)  I suppose the problem here
  is that the ghc people (laudably, sensibly, etc, ..) want a doc package
  that makes rtf as well as the usual unix doc formats.

If db2dvi is so hard to find on a typical installation, you (the GHC docs
person, Reuben, I think?) might just want to duplicate its functionality. I
am pretty sure that db2dvi (and db2{ps,rtf,...}) is just glue that finds
the DocBook DTD, stylesheets, entities and other stuff and then just calls
jade with the right output type option to do the actual formatting, so it's
really a one-liner if you know the locations of the files (OK, big "if" on a
Unix system...).
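A hypothetical sketch of such a wrapper, done as a dry run that prints the jade command instead of executing it. The stylesheet path is an assumption (installations differ), and real db2* scripts also locate SGML catalogs and entities; jade's -t (backend) and -d (stylesheet) options are the standard invocation:

```shell
# Hypothetical stand-in for db2dvi: find the DocBook print stylesheet,
# then hand the document to jade. Path below is an assumed default.
STYLESHEET="${DOCBOOK_DSSSL:-/usr/share/sgml/docbook/dsssl/print/docbook.dsl}"
DOC="${1:-manual.sgml}"

# -t tex selects jade's TeX backend (jadetex/dvips would finish the job);
# -t rtf would give RTF instead, which is all db2rtf changes.
CMD="jade -t tex -d $STYLESHEET $DOC"
echo "$CMD"   # dry run: print the one-liner rather than executing it
```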

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University
Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
Tel +31 (030) 253-1012, Fax +31 (030) 251-3791




HaskellDoc?

2000-03-14 Thread Frank Atanassow

Hi all,

I have seen many systems used as backends for the literate part of a literate
Haskell source file. There is the old literate system from GHC (now dead?),
straight HTML, straight TeXinfo, straight LaTeX, {Wiki,Smug,No,Funnel,...}web
and many personal LaTeX style files or programs which usually end up
converting to LaTeX and/or HTML and sometimes do some pretty-printing of the
code.

Now, I agree that (the freedom of, and potential for :) diversity is a good
thing, and it is nice that Haskell's literate syntax is flexible enough to
support all these things and more, but nevertheless it would be nice if there
were some de facto standard, something well-suited to documenting Haskell
programs, and which at least provides a straightforward way of producing LaTeX
and HTML (or XHTML, or XML when we get there...). Something that we could put
up on haskell.org and point to in a pinch. Preferably something written in
Haskell that the Haskell community could maintain itself.

Are there any candidates? Any thoughts on this? Am I the only one who is a
little annoyed at having to consider what literate format to use each time I
start an .lhs file in a new project? I know that different people will have
different needs (people who need to embed formulae in their docs, or want to
publish the source file as an article, say, will still want to use straight
LaTeX), but I still think it would be useful to have a default choice.

It seems to me that whatever system ends up getting used for the
"Documentation of standard Haskell libraries" (see the Haskell wish list)

  
http://www.pms.informatik.uni-muenchen.de/forschung/haskell-wish-list/items.php3?id=13

might be a good starting point, since that would provide a fairly good acid
test (although I realize there is actually no need for the Haskell libraries
to be literate themselves).

What do you all think?

--fa(c)





Re: HaskellDoc?

2000-03-14 Thread Frank Atanassow

George Russell writes:
  "D. Tweed" wrote:
   Documentation is a vague term: certainly it'd be undesirable for a
   specification of the libraries to be just a literate copy of the code
   itself. But if you're thinking in terms of an open source project where
   people fix bugs in the libraries then libraries that contain some
   explanation would be useful (as an additional item to the library spec).
  Couldn't agree more.  Every module should be commented, and every exported
  symbol (and lots of the internal ones) should also be commented.  But I don't
  think you need anything more than comment characters for this.

George made a good point which I think deserves to be made explicit:
client-level documentation, i.e., documentation of the specification, and
developer-level documentation, i.e., documentation of the implementation (and
possibly including the specification), have distinct requirements. I think
literate programming is most useful for documenting the implementation, the
idea being that you are more likely to track the actual state of the source if
you keep the documentation in the source itself.  And, from this perspective,
my remark about using the documentation system for the Haskell
libraries---which should clearly be client-level---is wrong-headed.

As far as literate programming goes, then, we could restrict the discussion to
developer-level documentation. However, I personally would still like a de
facto system for describing specifications of libraries and APIs... Also,
although you can well argue that documentation for the specification does not
belong inside the source, the formatting system (or whatever) for client- and
developer-level documentations can profitably be shared, or one can subsume
the other.

--fa(c)