Re: static evaluation of dynamics thing

2000-05-18 Thread Marcin 'Qrczak' Kowalczyk

Wed, 17 May 2000 13:42:22 +0400 (MSD), S.D.Mechveliani [EMAIL PROTECTED] writes:

 The "human" compilation detects that  h  and  h'  yield  (error..)  
 for each argument  ys.
 In many cases, the programmer would like the compiler to stop 
 compiling N and report something like 
 Error:  h  always yields (error...).

It can be a warning, but without introducing some new syntax it cannot
be an error. When the authors of GHC's Prelude wrote:

errorEmptyList :: String -> a
errorEmptyList fun =
  error (prel_list_str ++ fun ++ ": empty list")

they knew that it would always yield an error; this was exactly what they
wanted, and it is a correct program.
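
For instance, the Prelude's list functions call errorEmptyList in their
empty-list cases; roughly like this (a sketch from memory, not the exact
Prelude source; prel_list_str is assumed, and the function is renamed to
avoid clashing with the Prelude's head):

myHead :: [a] -> a
myHead (x:_) = x
myHead []    = errorEmptyList "head"

prel_list_str :: String
prel_list_str = "Prelude."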

Moreover, IMHO it would be bad to _require_ implementations to be able
to unfold any expression to a given level, i.e. to have an interpreter
inside each compiler. And the notion of level is not fixed: how many
levels does it take to unfold an application of sum?

So the ideal place for these compile-time assertions is optional
warnings, produced without special annotations from the programmer.
A compiler already has the freedom to evaluate constant expressions at
compile time and to give warnings for correct code that looks like a
programmer error - many already do. A good compiler can heuristically
spot applications of functions that do not always yield bottom to
arguments for which they do always yield bottom.
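
For example, a compiler could reasonably warn about a call whose argument
is a known constant (a made-up illustration, not a feature of any
particular compiler):

bad :: Int
bad = head ([] :: [Int])   -- always bottom; a warning here would be welcome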

-- 
 __("Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/
 \__/  GCS/M d- s+:-- a23 C+++$ UL++$ P+++ L++$ E-
  ^^  W++ N+++ o? K? w(---) O? M- V? PS-- PE++ Y? PGP+ t
QRCZAK  5? X- R tv-- b+++ DI D- G+ e h! r--%++ y-





Re: Block simulation / audio processing

2000-05-18 Thread Johannes Waldmann



  Has anyone built any block simulators (for modeling continuous electronic
  systems, like OP Amps, RC networks, etc) in Haskell? 

I'm also interested in this. I am thinking of extending
Paul Hudak's Haskore system to generate and handle true audio data
(instead of, or in addition to, MIDI data).

I don't think I'll have enough time to do the programming myself,
but since I'll be using Hudak's book in next term's course,
I hope I can attract some students, and set them in the right direction.

In fact one student who read the course announcement
(and the book's web page) already asked me 
about functional audio signal processing.

Any pointers appreciated,
-- 
-- Johannes Waldmann  http://www.informatik.uni-leipzig.de/~joe/ --
-- [EMAIL PROTECTED] -- phone/fax (+49) 341 9732 204/209 --





Re: Block simulation / audio processing

2000-05-18 Thread Jerzy Karczmarczuk

Johannes Waldmann :

   Has anyone built any block simulators (for modeling continuous electronic
   systems, like OP Amps, RC networks, etc) in Haskell?
 
 I'm also interested in this. I am thinking of extending
 Paul Hudak's Haskore system to generate and handle true audio data
 (instead of, or in addition to) MIDI data.


 In fact one student who read the course announcement
 (and the book's web page) already asked me
 about functional audio signal processing.
 
 Any pointers appreciated,

There are two distinct problems/areas here.

1. Block simulators, dataflow interfacing etc...
   People mentioned FRAN, but somehow I missed (improbable
   that nobody fired the *obvious* keyword here): HAWK!!!
   See the Haskell Home page; you will find all about it there.

2. DSP, audio streams, etc.
   This is another story, although DSP in a dataflow style is
   something full of sex-appeal (at least for me, 
   an old physicist...).

   Frankly, there is not too much about the functional approach to DSP
   on the Web. I can give you some dozens of pointers to tutorials,
   algorithm descriptions, etc., since I am interested (at least
   conceptually) myself. Lazy algorithms for the filter design,
   for mad recursive special effects (flanging, reverb), for the
   spectral synthesis, pitch shifting - all this is nice, 
   elegant, fascinating, clever...

   ...and horribly inefficient ...

   Do you realize the amount of data processed in order to generate
   10 seconds of audio stream at a 96 kHz sampling frequency?
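
   A back-of-the-envelope sketch (my numbers, assuming 16-bit stereo
   samples, just to make the scale concrete):

   samples, bytes :: Integer
   samples = 10 * 96000 * 2      -- 1,920,000 samples for 10 s of stereo
   bytes   = samples * 2         -- about 3.8 MB before any processing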

   First, real-time generation might have severe problems with
   garbage collection. Generating all this off-line is OK.
   (BTW, I remember that Paul Hudak thought about generating CSound
   streams from Haskore, but I lost track of it...)

   Generating true audio data might be quite heavy. Frankly, I think
   that perhaps one should begin with something intermediate between
   MIDI and real audio streams; we could for example make a functional
   tracker which combines (and transforms) pre-formed audio samples.
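
   Something like the following toy sketch (my own, not Haskore's API),
   where sounds are lazy lists of samples and a "track" just places
   pre-formed samples at offsets:

   type Sample = Double
   type Sound  = [Sample]

   mix :: Sound -> Sound -> Sound       -- sample-wise sum, keeping the longer tail
   mix (x:xs) (y:ys) = x + y : mix xs ys
   mix xs     []     = xs
   mix []     ys     = ys

   delay :: Int -> Sound -> Sound       -- prepend n samples of silence
   delay n s = replicate n 0 ++ s

   gain :: Sample -> Sound -> Sound
   gain g = map (g *)

   track :: [(Int, Sample, Sound)] -> Sound
   track = foldr (\(n, g, s) acc -> mix (delay n (gain g s)) acc) []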

Thank you for the inspiration. If I had time enough...

Jerzy Karczmarczuk
Caen, France




Re: more detailed explanation about forall in Haskell

2000-05-18 Thread Jan Brosius

Thanks for your comments.
They are to the point.

But the first email arose from the fact that someone else claimed that
the forall quantifier was used in the same way as in (say "classical")
logic.

I still claim that everything could be put in a classical logic framework,
which is then a different framework from the one proposed in Haskell.

Very friendly
Jan Brosius


- Original Message -
From: Lars Lundgren [EMAIL PROTECTED]
To: Jan Brosius [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, May 17, 2000 10:56 AM
Subject: Re: more detailed explanation about forall in Haskell


 On Tue, 16 May 2000, Jan Brosius wrote:

 
  - Original Message -
  From: Lars Lundgren [EMAIL PROTECTED]
  
  Sent: Tuesday, May 16, 2000 1:54 PM
  Subject: Re: more detailed explanation about forall in Haskell
 
 
   On Tue, 16 May 2000, Jan Brosius wrote:
  
    Ok I understand this isomorphism better. However this remark seems
    to be of no value to functional programmers.
    Why try to mix terms (or types) with relations?
   
  
   What is a 'type' in your opinion?
 
  I look at it as a set (either a variable set or a specific set). E.g. I
  look at Bool as a specific set which itself contains values True, False
  that are not sets. Next I interpret something like f :: a -> Int as a
  family (indexed by a) of functions of "set" a into set Int.

 So in other words, f will map any value - given that it is an element of
 the right set (i.e. given that it satisfies the precondition) - to a value
 that is an element of Int - sounds like a postcondition to me.


  I didn't run into problems with this interpretation after having read
  the Haskell User's Guide, which is my only general source about Haskell.
  It is up to someone else to say if such an interpretation shall lead
  into misinterpretation. I think Haskell will not be attractive to
  mathematicians if types MUST be read as formulas.

 Of course that is not the case, but it might help if you want to
 understand the rationale behind the choice of name for the type
 quantifier "forall".

  If that is the case I can only say that I find the term "functional
  programming" a bit strange.
 
  
   Isn't a type a statement about pre- and post-conditions, i.e. a formula?
 
  I can't answer this since I have never read the definition of a type
  in, say, typed lambda calculus.

 I wasn't after a definition, I just tried to explain that it is quite
 natural to view types as formulas, and that it is not in conflict with
 other views. (and it is not restricted to type systems for fp languages)

  I have a book about logic that deals with plain lambda calculus and
  that deals with "restricted" typed lambda calculus (where a type is not
  considered as a formula). I never read the definition of Hindley-Milner
  types.

 If you read the typing rules, remember to have a reference of the natural
 deduction rules with you...

  And the only introduction about types were the general user's guides
  from Clean , SML , Ocaml and of Haskell.
  

 If you are interested in systems that exploit the isomorphism further you
 can take a look at :

 cayenne, agda, alfa from:

 http://www.cs.chalmers.se/Cs/Research/Logic/implementation.mhtml

 /Lars L











Re: more detailed explanation about forall in Haskell

2000-05-18 Thread Jan Brosius


From: Frank Atanassow [EMAIL PROTECTED]
To: Jan Brosius [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Wednesday, May 17, 2000 1:35 PM
Subject: Fw: more detailed explanation about forall in Haskell


 Jan Brosius writes:
Why do some computer scientists have such problems with the good
logical
forall
and exist.  Remember that good old logic came first.
On it was built SET theory.
On it was built topological space
   
To prove some theorem in lambda calculus one used a topological
model.
   
You see : good old logic came FIRST  afterwards came theorems of
typed
lambda calculus .
This is not the sophistic question : what came first : the egg or the
chicken.
   
NO good old logic came first .

 Your argument is absurd and irrelevant.

  SORRY,  this is quite TRUE , in fact  [forall x. alpha(x)] <=> alpha(x)

Here, I have to say that the above equivalence is false . SHAME on me.

I must put this in the good way;

[forall x . alpha(x)] => alpha(x)   is   True

If alpha(x) is TRUE then the following is true : alpha(x) => [forall x.
alpha(x)]

So if alpha(x) is TRUE then  [forall x. alpha(x)] <=> alpha(x)

Sorry for the error
 
  the above true equivalence seems to be easily considered as wrong . Why?
  Because alpha(x)  is TRUE can be read as  alpha(x) is TRUE for ANY x.

 I think this is one of the roots of your misconceptions.

 These two propositions are certainly not equivalent. Maybe it is because
 you have been told that, in Haskell syntax, we typically elide the
 "forall" quantifier.

 The sentence "alpha(x)" asserts that there is a _distinguished_ element
named

NO , saying that there is a distinguished element T that satisfies alpha
implies that   exists x. alpha(x)   is true

So the following implication is true

(T | x) alpha(x) => exists x. alpha(x)

that is , if alpha(T) is true then  [exists x. alpha(x)] is true

more precisely , let c be a constant , and if alpha(c) is true then

you can say that  exists x. alpha(x) is true.

 "x" in the domain of your model, and that that element, at least,
satisfies

x is a variable ; no domain ; no model

 "alpha". The sentence "forall x. alpha(x)", OTOH, asserts that _any_
element
 in your domain satisifies "alpha". So if your domain is a set D, then a
model
 of "alpha(x)" needs a subset C of D to interpret "alpha", along with a
member
 X of C to interpret "x", and the sentence is true iff X is in C. OTOH, a
model
 of "forall x. alpha(x)" needs only the subset C, and is true iff C=D,
since it
 asserts that for any element Y of D, Y is in C.

 In Haskell the situation is complicated by the fact that there are no
 "set-theoretic" models (are you even aware that Haskell's type system is
 unsound?), and the fact that the domain is multi-sorted. But those facts
 do not bear on the distinction between the two terms on either side of
 the equivalence above.

Very friendly
Jan Brosius

 --
 Frank Atanassow, Dept. of Computer Science, Utrecht University
 Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
 Tel +31 (030) 253-1012, Fax +31 (030) 251-3791









Re: more detailed explanation about forall in Haskell

2000-05-18 Thread Jan Brosius

Sorry, if in some way I have upset you
Sincerely
Jan Brosius

From: Frank Atanassow [EMAIL PROTECTED]
To: Frank Atanassow [EMAIL PROTECTED]
Sent: Wednesday, May 17, 2000 1:50 PM
Subject: Fw: more detailed explanation about forall in Haskell


 Frank Atanassow writes:
   Jan Brosius writes:
  Why do some computer scientists have such problems with the good
logical
  forall
  and exist.  Remember that good old logic came first.
  On it was built SET theory.
  On it was built topological space
 
  To prove some theorem in lambda calculus one used a topological
model.
 
  You see : good old logic came FIRST  afterwards came theorems of
typed
  lambda calculus .
  This is not the sophistic question : what came first : the egg or
the
  chicken.
 
  NO good old logic came first .
  
   Your argument is absurd and irrelevant.

 I take it back. There is no argument here, only the childish insinuation
that
 "my daddy can beat up your daddy".

 So there. blt!

 --
 Frank Atanassow, Dept. of Computer Science, Utrecht University
 Padualaan 14, PO Box 80.089, 3508 TB Utrecht, Netherlands
 Tel +31 (030) 253-1012, Fax +31 (030) 251-3791








Fw: Fw: more detailed explanation about forall in Haskell

2000-05-18 Thread Jan Brosius


- Original Message -
From: Jan Brosius [EMAIL PROTECTED]
To: Carl R. Witty [EMAIL PROTECTED]
Sent: Thursday, May 18, 2000 12:06 PM
Subject: Re: Fw: more detailed explanation about forall in Haskell


 Thanks Carl for letting me see an ugly error that I made . SHAME on me.

  the equivalence  [forall x. alpha(x)] <=> alpha(x)  is ONLY TRUE if
 alpha(x) is a TRUE formula.

 [forall x. alpha(x)] => alpha(x)  is TRUE



 Let's see what happens when alpha(x) is FALSE for some x, and if one
 would accept that:  alpha(x) => [forall x. alpha(x)]  is TRUE.

 Since alpha(x) is FALSE there might still be a constant c such that
 alpha(c) is TRUE

 Then (Modus Ponens)

 alpha(c) => [forall x. alpha(x)]

 alpha(c)

 *

 forall x . alpha(x)  is TRUE

 which is of course false

 this shows that the implication  alpha(x) => [forall x. alpha(x)]  is in
 general false (it is only true if alpha(x) is a true formula).

 Very friendly
 Jan Brosius


 .From: Carl R. Witty [EMAIL PROTECTED]
 To: Jan Brosius [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]
 Sent: Wednesday, May 17, 2000 4:24 AM
 Subject: Re: Fw: more detailed explanation about forall in Haskell


  "Jan Brosius" [EMAIL PROTECTED] writes:
 
    SORRY,  this is quite TRUE , in fact  [forall x. alpha(x)] <=> alpha(x)
   
    the above true equivalence seems to be easily considered as wrong . Why?
    Because alpha(x)  is TRUE can be read as  alpha(x) is TRUE for ANY x.
   
    (Is there something wrong with the education of a computer scientist?)
 
  Jan, could you tell us whether you think the following statements are
  true or false?
 
  Let prime(x) mean "x is a prime number".
 
  1. [forall x. prime(x)] <=> prime(x)
  2. forall x.([forall x. prime(x)] <=> prime(x))
  3. forall y.([forall x. prime(x)] <=> prime(y))
  4. [forall x. prime(x)] <=> prime(y)
  5. [forall x. prime(x)] <=> prime(2)
  6. [forall x. prime(x)] => prime(2)
  7. prime(2) => [forall x. prime(x)]
  8. prime(2)
  9. [forall x. prime(x)] <=> prime(4)
  10. [forall x. prime(x)] => prime(4)
  11. prime(4) => [forall x. prime(x)]
  12. prime(4)
  13. forall x. prime(x)
  14. prime(x)
  15. Statement 1 above means the same thing as statement 2 above.
  16. Statement 2 above means the same thing as statement 3 above.
  17. Statement 3 above means the same thing as statement 4 above.
  18. Statement 5 above is a substitution instance of statement 3;
  thus, if statement 3 were true, statement 5 would be true.
  19. Statement 13 above means the same thing as statement 14 above.
 
  If we follow the convention that free variables are to be considered
  implicitly universally quantified, my vote is that statements 6, 8, 9,
  10, 11, 15, 16, 17, 18, and 19 are true; the rest are false.
 
  Carl Witty
 






Re: Block simulation / audio processing

2000-05-18 Thread Koen Claessen

Mike Jones asked:

 | Has anyone built any block simulators (for modeling
 | continuous electronic systems, like OP Amps, RC
 | networks, etc) in Haskell?

Johannes Waldmann added:

 | I'm also interested in this. I am thinking of
 | extending Paul Hudak's Haskore system to generate and
 | handle true audio data (instead of, or in addition to)
 | MIDI data.

Jerzy Karczmarczuk answered:

 | HAWK!!! See the Haskell Home page, you find all about.
 : 
 | DSP, audio streams, etc. Do you realize the amount of
 | data processed in order to generate 10 seconds of
 | audio stream at 96kHz of the sampling frequency?

I did not reply with *my* obvious answer: LAVA!! :-) This is
because I thought the original question was about
*continuous* systems, and Lava (and Hawk) are about
discrete/digital systems.

But if you find that the Hawk way is interesting for doing these
kinds of things, take a look at Lava as well. Lava has
recently gotten a major rewrite (no monads left!), and at
the moment we are evaluating the new version in a course.
One can take a preview in the Lava tutorial, available on:

  http://www.cs.chalmers.se/Cs/Grundutb/Kurser/svh/

The main difference between Hawk and Lava is the
following.

Hawk programs describe sequential systems mainly targeting
simulation (running these in Haskell). This means one can
use any Haskell datatype and function in the definition of a
system. Very powerful and expressive!

Lava programs can be simulated, but also unfolded to yield a
description of the system in a different, lower-level,
language. Examples of these languages are VHDL, 
C, EDIF, state machine notation, temporal propositional
logic, etc.

This means that the programmer is limited to using datatypes
that can be expressed in terms of basic types supported by
the target language (usually booleans and integers or
floats), and to using functions that can be expressed in terms
of basic operations on these types.

When describing a specific domain, for example gate-level
hardware, this does not seem to be too much of a problem.

It would be interesting to see if one can use the Lava
approach to describe a system that performs, say, an
algorithm on an audio stream. The algorithm would be
expressed in terms of basic operations on audio streams. 
In the end one could generate C (or so) from this
description.
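
As a rough illustration of the idea (a toy sketch of my own, not Lava's
actual types or API), one and the same description can be simulated in
Haskell or unfolded and printed in a lower-level language:

  data Sig                         -- a toy deep embedding of signal expressions
    = Input String
    | Const Double
    | Add Sig Sig
    | Mul Sig Sig

  block :: Sig -> Sig              -- one description: out = 0.5*in + 1.0
  block x = Add (Mul (Const 0.5) x) (Const 1.0)

  eval :: (String -> Double) -> Sig -> Double   -- ... simulate it ...
  eval env (Input n) = env n
  eval _   (Const c) = c
  eval env (Add a b) = eval env a + eval env b
  eval env (Mul a b) = eval env a * eval env b

  toC :: Sig -> String             -- ... or unfold it to, say, a C expression
  toC (Input n) = n
  toC (Const c) = show c
  toC (Add a b) = "(" ++ toC a ++ " + " ++ toC b ++ ")"
  toC (Mul a b) = "(" ++ toC a ++ " * " ++ toC b ++ ")"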

If anyone is interested in doing such a thing, I would be
happy to send a preliminary version of Lava, and to
explain how it is implemented and how to modify it to
deal with other domains than digital hardware.

Regards,
Koen.

--
Koen Claessen http://www.cs.chalmers.se/~koen 
phone:+46-31-772 5424  e-mail:[EMAIL PROTECTED]
-
Chalmers University of Technology, Gothenburg, Sweden







Lava (was Re: Block simulation / audio processing)

2000-05-18 Thread Rob MacAulay

Koen Claessen wrote:

 I did not reply with *my* abvious answer: LAVA!! :-) This is
 because I thought the original question was about
 *continuous* systems, and Lava (and Hawk) are about
 discrete/digital systems.
 
 But if you find that the Hawk way is interesting to do these
 kind of things, take a look at Lava as well. Lava has
 recently gotten a major rewrite (no monads left!), and at
 the moment we are evalutating the new version in a course.
 One can take a preview in the Lava tutorial, available on:
 
   http://www.cs.chalmers.se/Cs/Grundutb/Kurser/svh/
 

Great! Does this mean that at last you will release Lava? 
I found Lava very interesting, but could not re-create it completely 
from your published papers. And so far you have not made the 
source code available..

Regards,

Rob MacAulay
Cambridge




Re: Type of minimumBy

2000-05-18 Thread Keith Wansbrough

 OTOH, if we were to redefine all the xxxBy functions that involve
 comparison, I'd vote for ((<=) :: a->a->Bool) over (compare ::
 a->a->Ordering) as the comparison function since (<=) is often easier to
 create a quick definition for.  I wouldn't consider such a change until
 Haskell 2, though.

I disagree... I don't think we should be making `quick-and-dirty'
definitions easy, I think we should be doing it the Right Way.  It
takes two `<=' comparisons to get the information obtainable from one
`compare', but one `compare' is also enough to give a result for `<='.
It usually requires no more computation to give the more specific
result.

If you really want quick-and-dirty, you could add:

le2ord :: (a -> a -> Bool) -> (a -> a -> Ordering)
le2ord le a b = case (a `le` b, b `le` a) of
  (True, False) -> LT
  (True, True ) -> EQ
  (False,True ) -> GT

to the prelude (or to an Ordering library).  While you're constructing 
an Ordering library, you could add to it:

isLE :: Ordering -> Bool
isLE LT = True
isLE EQ = True
isLE GT = False

thenCmp :: Ordering -> Ordering -> Ordering
EQ `thenCmp` o2 = o2
o1 `thenCmp` _  = o1

and a partial ordering class

type POrdering = Maybe Ordering

class POrd a where
  pcompare :: a -> a -> POrdering

instance Ord a => POrd a where
  pcompare a b = Just (compare a b)
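
For what it's worth, here is a toy example of my own (written as a plain
function, to sidestep the overlap with the catch-all Ord instance above)
where the result is genuinely partial: pairs ordered componentwise, so
e.g. (1,2) and (2,1) are incomparable.

pcomparePair :: (Ord a, Ord b) => (a, b) -> (a, b) -> POrdering
pcomparePair (a1, b1) (a2, b2) =
  case (compare a1 a2, compare b1 b2) of
    (o1, o2) | o1 == o2 -> Just o1
    (EQ, o2)            -> Just o2
    (o1, EQ)            -> Just o1
    _                   -> Nothing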

Just my £0.02 (about US$0.04 I believe).

--KW 8-)
-- 
: Keith Wansbrough, MSc, BSc(Hons) (Auckland) ---:
: PhD Student, Computer Laboratory, University of Cambridge, UK. :
: Native of Antipodean Auckland, New Zealand: 174d47'E, 36d55'S. :
: http://www.cl.cam.ac.uk/users/kw217/ mailto:[EMAIL PROTECTED] :
::






Re: more detailed explanation about forall in Haskell

2000-05-18 Thread Frank Atanassow

Jan Brosius writes:
  I must put this in the good way;
  
  [forall x . alpha(x)] => alpha(x) is True

Yes, by instantiation.

  If alpha(x) is TRUE then the following is true : alpha(x) => [forall x.
  alpha(x)]

No, alpha(x) only asserts that some element named x satisfies alpha. It does
not follow that therefore every other element also satisfies alpha.

  So if alpha(x) is TRUE then [forall x. alpha(x)] <=> alpha(x)

This does not follow. It is truth-equivalent to:

  [forall x. alpha(x)] <=> True

which is equivalent to:

  forall x. alpha(x)

which is only true when every element satisfies alpha; so it is not a
tautology.

I repeat: you do not understand the difference between bound and free
variables. A free variable behaves like a constant. It is not "implicitly"
quantified in any way, because it denotes a specific element. The only
difference between a constant and a free variable is syntactic: a free
variable can be bound in an outer scope, while a constant cannot.

   The sentence "alpha(x)" asserts that there is a _distinguished_ element
   named
  
  NO , saying that there is a distinguished element T that satisfies alpha
  implies
  
  that exists x. alpha(x) is true

Yes, it also implies this fact, because it can be derived by extensionality:

  alpha(x) => exists y. alpha(y)

However, existential quantification hides the identity of the element in
question. The fact that we know _which_ element it is that satisfies alpha
permits us to say more. This is why:

  exists x. alpha(x),
  alpha(x),

and

  forall x. alpha(x)

are not truth-equivalent. Rather we have:

  forall y. alpha(y) => alpha(x)   and   alpha(x) => exists z. alpha(z)

   "x" in the domain of your model, and that that element, at least,
  satisfies
  
  x is a variable ; no domain ; no model

A model must assign an element in the domain to each free variable. You need a
model to determine the truth-value of a first-order proposition which is not
tautological. We are trying to establish the truth-value of a proposition with
a free variable; therefore we need a model. A model needs a domain of elements
to draw from. Therefore we need a domain. OK?

-- 
Frank Atanassow, Dept. of Computer Science, Utrecht University Padualaan 14,
PO Box 80.089, 3508 TB Utrecht, Netherlands Tel +31 (030) 253-1012, Fax +31
(030) 251-3791





RE: Block simulation / audio processing

2000-05-18 Thread Peter Douglass

Koen Claessen wrote:

 But if you find that the Hawk way is interesting to do these
 kind of things, take a look at Lava as well. Lava has
 recently gotten a major rewrite (no monads left!), ...

I'm interested to know the rationale behind removing the monads.  My
admittedly small experience with Haskell has led me to avoid monads when the
same can be achieved with "pure" functions, and for much of the same reasons
that I find imperative programming ugly.  But other people seem to love
them!
--PeterD 




RE: Block simulation / audio processing

2000-05-18 Thread Mike Jones

Jerzy,

1. Block simulators, dataflow interfacing etc...
   People mentioned FRAN, but somehow I missed (improbable
   that nobody fired the *obvious* keyword here): HAWK!!!
   See the Haskell Home page; you will find all about it there.

This is exactly what I have been looking at. My problem is how to
dynamically control the step size of algorithms to support algorithms that
have non-uniform step size. Perhaps some kind of clock divider scheme.
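
Something like this toy sketch is what I have in mind (on plain lazy
streams here, not Hawk's actual signal type):

  -- keep one sample out of every n: a crude "clock divider" on streams
  decimate :: Int -> [a] -> [a]
  decimate n (x:xs) = x : decimate n (drop (n - 1) xs)
  decimate _ []     = []

  -- e.g.  decimate 3 [0..9]  ==  [0,3,6,9]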

Mike





RE: Block simulation / audio processing

2000-05-18 Thread Koen Claessen

Peter Douglass wrote:

 | [Lava] I'm interested to know the rationale behind
 | removing the monads.

The reason we removed the monads was that circuits with
feedback (loops) in them became very tedious to define. One
had to use monadic fixpoint operators (or "softer" variants
on them), which were really unnatural to use. Also, the
monadic style enforces an ordering on the components that
you are using, while in a circuit, there is no such ordering
(everything works in parallel).
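
To make the feedback point concrete, here is a toy stream model (just a
sketch of my own, not Lava's actual Signal type or combinators), in which
a circuit with a loop is written as an ordinary recursive binding:

  type Signal a = [a]                 -- one value per clock tick

  delay :: a -> Signal a -> Signal a  -- a register: initial value, then the input
  delay x xs = x : xs

  inv :: Signal Bool -> Signal Bool
  inv = map not

  toggle :: Signal Bool               -- output fed back through an inverter
  toggle = out where out = delay False (inv out)
  -- take 6 toggle == [False,True,False,True,False,True]

  -- Fine for simulation; recovering the loop as a circuit graph (for
  -- generating VHDL etc.) is what needs something more.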

The way we removed the monads was by making a little
extension to Haskell, called Observable Sharing, which
allows one to detect if two branches in a tree are really
the same branch or merely copies of each other. Since this
is not possible to do in Haskell, this extension technically
defines a new language (not Haskell). See our paper for more
details [1]. The extension allows one to detect loops and
shared components in a data structure.

To our delight, Lava circuit descriptions now look very much
like Hawk circuit descriptions. One of our goals was to make
the two systems come closer together.

Unfortunately, while we were struggling to take the monads out
of Lava, the Hawk people seem to have gotten it into their
minds to put monads in! :-)

Their solution to the feedback problem is to extend the
do-notation to introduce monadic fixpoint combinators
automatically (just as it introduces >>= at the moment).
This idea has been around for a long time, but I have not
seen or come up with a satisfactory solution to it. I am
looking forward to seeing their solution!

A big advantage of having monads is of course that you can
put a lot of extra information in them about the used
components, such as layout information. Currently we are
developing new ways to do this without monads.

Regards,
Koen.

[1] Koen Claessen, David Sands, "Observable Sharing for
Functional Circuit Description", ASIAN '99, Phuket,
Thailand, 1999.
http://www.cs.chalmers.se/~koen/Papers/obs-shar.ps

--
Koen Claessen http://www.cs.chalmers.se/~koen 
phone:+46-31-772 5424  e-mail:[EMAIL PROTECTED]
-
Chalmers University of Technology, Gothenburg, Sweden





HOOTS Call For Papers - Deadline June 22

2000-05-18 Thread Alan Jeffrey

Dear all,

A reminder that the deadline for papers for the HOOTS workshop is coming 
up, on June 22 2000.  Only a month to go!

Alan.

-- 
Alan Jeffrey   http://fpl.cs.depaul.edu/ajeffrey/
CTI, DePaul University, 243 S. Wabash Ave, Chicago 60604, USA

--

  HOOTS 2000

 Call For Papers for
 The Fourth International Workshop on
   Higher Order Operational Techniques in Semantics

   A satellite workshop of PLI 2000
  Montreal, September 21-22

HOOTS 2000 home page: http://hoots.cs.depaul.edu/
HOOTS home page:  http://www.cl.cam.ac.uk/users/amp12/hoots
PLI 2000 home page:   http://www.cs.yorku.ca/pli-00/

SCOPE The fourth workshop on Higher Order Operational Techniques in
Semantics (HOOTS 2000) will address fundamental principles and
important innovations in the definition, analysis, and application of
operational semantics for higher order languages and calculi.
Techniques addressed in the HOOTS series include operational
equivalences, type systems, program logics and relationships with
other forms of semantics. Application areas include the specification
and implementation of programming languages, security, and
mobility. Languages discussed include both high-level and low-level
languages, and a variety of calculi, including calculi of functions,
objects, and processes.

Deadline for submission:  June 22, 2000
Notification of acceptance:   July 29, 2000
Final version due:  August 26, 2000
HOOTS 2000, Montreal: September 21-22, 2000

PROGRAMME COMMITTEE Andrew Gordon, Microsoft Research; Robert Harper,
Carnegie Mellon University; Alan Jeffrey, DePaul University (Chair);
Andrew Pitts, Cambridge University; Julian Rathke, Sussex University;
David Sands, Chalmers University; Davide Sangiorgi, INRIA Sophia
Antipolis; Carolyn Talcott, Stanford University.

PREVIOUS MEETINGS The first HOOTS workshop was organised by Andrew
Gordon and Andrew Pitts on October 28-30, 1995 as one of the events
within the 6-month research programme on Semantics of Computation at
the Isaac Newton Institute for Mathematical Sciences, University of
Cambridge, UK. A book based on presentations at the workshop appeared
in the Publications of the Newton Institute series published by
Cambridge University Press.  The second HOOTS workshop was organised
by Andrew Gordon, Andrew Pitts, and Carolyn Talcott on December 8-11,
1997 at the Center for the Study of Language and Information, Stanford
University, USA. The third HOOTS workshop was organized by Andrew
Gordon and Andrew Pitts on September 30 and October 1, 1999 in Paris,
France, as part of the Principles, Logics, and Implementations of
high-level programming languages conference. Elsevier published an
electronic proceedings of the second workshop as Volume 10 of
Electronic Notes in Theoretical Computer Science, and the proceedings
of the third workshop as Volume 26.

SUBMISSION & PUBLICATION PostScript submissions of up to 12 pages
should be sent by email to Alan Jeffrey [EMAIL PROTECTED] by June
22, 2000.  Simultaneous submission to other conferences or journals is
not allowed.  Papers should be formatted for USLetter or A4 paper.
Elsevier will publish the proceedings as a volume of Electronic Notes
in Theoretical Computer Science.  Hardcopies will be distributed at
the workshop.




Re: more detailed explanation about forall in Haskell

2000-05-18 Thread Jan Brosius


From: Frank Atanassow [EMAIL PROTECTED]
To: Jan Brosius [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, May 18, 2000 2:53 PM
Subject: Re: more detailed explanation about forall in Haskell


 Jan Brosius writes:
   I must put this in the good way;
  
    [forall x . alpha(x)] => alpha(x) is True

 Yes, by instantiation.

I disagree.


    If alpha(x) is TRUE then the following is true : alpha(x) => [forall x.
    alpha(x)]

  No, alpha(x) only asserts that some element named x satisfies alpha. It
  does not follow that therefore every other element also satisfies alpha.

No, the variable x means that it can be substituted by any term. That is
the real meaning of a variable (at least in classical logic).

Consider e.g. Carl Witty's formula:  prime(x) == x is a prime number.
Then  prime(2) == (2|x) prime(x)  is true, and prime(4) is false.

Take something else alpha(x) == prime(x) AND ¬ prime(x)   ( ¬  signifies
not)

then alpha(x) is a false formula  and you have  ¬ [exists x. alpha(x)]

On the contrary   beta(x) == ¬ alpha(x) is true for any x


    So if alpha(x) is TRUE then [forall x. alpha(x)] <=> alpha(x)

 This does not follow. It is truth-equivalent to:

Sorry, I disagree


    [forall x. alpha(x)] <=> True

( In the above you should at the very least not forget to mention that
[forall x. alpha(x)] must be true )

In principle   True   is a sloppy expression for saying that [forall x.
alpha(x)] is a theorem of some mathematical theory. I have used True
because discussing mathematical theories would have led the discussion
outside the scope of this one.

For instance let "T="  denote a mathematical theory with equality

then one has   "T="  |-  x = x

that is, the formula  x = x  is a theorem in "T=" . More sloppily, the
formula  x = x  is TRUE in "T="



 which is equivalent to:

   forall x. alpha(x)

  which is only true when every element satisfies alpha; so it is not a
 tautology.

 I repeat: you do not understand the difference between bound and free
variables.

I disagree: a free variable is not a constant.

A bound variable is a variable tied to a quantifier. In principle a bound
variable is not visible.
This is best illustrated by  Bourbaki's approach where they first define :

exists x . alpha(x) == (`t x (alpha)| x) alpha

where if e.g.  alpha(x) == x = x then

`t x (alpha) == `t ( `sq = `sq )  where `sq denotes a small square  and
where here both `sq are tied to `t by a line
 In this way I can really say that in   [exists x. x = x]   the variable x
 has disappeared.

Finally Bourbaki defines   forall x . alpha(x) == ¬ [exists x . ¬ alpha(x)]

I repeat : there are no domains in propositional calculus , but I am aware
that in some text books about
propositional calculus one speaks about it , I regard this as cheating :
introducing a concept that one is   going to define later. I remember my
prof, in logic in 1 cand. math - physics where his first lesson began by
writing
on the blackboard  :  p V ¬ p  and he only said  " p or not p "  " tertium
non datur"  " a third possibility is not given "
His whole course was as formal as possible, and variables were not
supposed to range over a domain.

 A free variable behaves like a constant.

No, a constant is always the same constant; it is never substituted by
something else, it is "immutable".

A variable is a placeholder for a Term ( a constant is a special term)

It is not "implicitly"
 quantified in any way, because it denotes a specific element. The only
 difference between a constant and a free variable is syntactic: a free
 variable can be bound in an outer scope,

No, the real role of a variable, say x, in a formula alpha(x) or a term
T(x) is as a PLACEHOLDER ready to be substituted by a Term (e.g. a
constant or a parameterized term).

And that's the reason why I should find it weird if in Haskell 98+ one
would distinguish between

a -> a   and   [forall a. a -> a]

while a constant cannot.

The sentence "alpha(x)" asserts that there is a _distinguished_
element
named
  
   NO , saying that there is a distinguished element T that satisfies
alpha
   implies
  
   that exists x. alpha(x) is true

 Yes, it also implies this fact, because it can be derived by
extensionality:

    alpha(x) => exists y. alpha(y)

 However, existential quantification hides the identity of the element in
 question. The fact that we know _which_ element it is that satisifies
alpha
 permits us to say more. This is why:

   exists x. alpha(x),
   alpha(x),

 and

   forall x. alpha(x)

 are not truth-equivalent. Rather we have:

    forall y. alpha(y) => alpha(x)   and   alpha(x) => exists z. alpha(z)

AND if alpha(x) is a TRUE formula then   alpha(x) => forall x. alpha(x)

A better way to express this is

Rule of generalization:

alpha(x)  is True  (and x is a variable)
----------------------------------------
forall x . alpha(x)  is true

I repeat that my use of true or false is not really the way it is expressed
in Bourbaki , I used it
in order to be understood by everybody (and in 

Re: Type of minimumBy

2000-05-18 Thread Matt Harden

Keith Wansbrough wrote:

  OTOH, if we were to redefine all the xxxBy functions that involve
  comparison, I'd vote for ((<=) :: a->a->Bool) over (compare ::
  a->a->Ordering) as the comparison function since (<=) is often easier to
  create a quick definition for.  I wouldn't consider such a change until
  Haskell 2, though.
 
 I disagree... I don't think we should be making `quick-and-dirty'
 definitions easy, I think we should be doing it the Right Way.  It
 takes two `<=' comparisons to get the information obtainable from one
 `compare', but one `compare' is also enough to give a result for `<='.
 It usually requires no more computation to give the more specific
 result.

I don't disagree strongly with this, but... sortBy, insertBy, etc. can
almost always be implemented just as easily with (<=) as with compare.
That is, they don't need more calls to (<=) than they would calls to
compare.  That means that if compare is more complex than (<=), as it
possibly can be, you end up with a less efficient algorithm.
Admittedly, there may be some algorithms that would make fewer
comparisons if defined in terms of compare rather than (<=).

 If you really want quick-and-dirty, you could add:

When I said "quick" I should probably have said "simple"; I didn't mean
"quick-and-dirty".

 le2ord :: (a -> a -> Bool) -> (a -> a -> Ordering)
 le2ord le a b = case (a `le` b, b `le` a) of
   (True, False) -> LT
   (True, True ) -> EQ
   (False,True ) -> GT
 
 to the prelude (or to an Ordering library).  While you're constructing
 an Ordering library, you could add to it:
 
 isLE :: Ordering -> Bool
 isLE LT = True
 isLE EQ = True
 isLE GT = False

Yes, I'm well aware that compare can be defined in terms of (<=) and
vice-versa.

 thenCmp :: Ordering -> Ordering -> Ordering
 EQ `thenCmp` o2 = o2
 o1 `thenCmp` _  = o1

OK, so:
  instance (Ord a) => Ord [a] where
    compare (x:xs) (y:ys) = (compare x y) `thenCmp` (compare xs ys)
    compare [] [] = EQ
    compare [] _  = LT
    compare _  [] = GT

  instance (Ord a) => Ord [a] where
    (x:xs) <= (y:ys) = not (y <= x) || ((x <= y) && (xs <= ys))
    (_:_)  <= []     = False
    []     <= _      = True

I must admit I like the compare version better, and it involves N calls
to compare as opposed to 2N calls to (<=).  It's one of those algorithms
I admitted might exist earlier!  thenCmp looks very useful, and thenLE
can't be defined, of course.  I think I'm starting to see the light. :)
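
For instance (a made-up example, not from the original mail; it assumes
the thenCmp above), ordering by one key and then another composes
directly:

  import List (sortBy)

  data Person = Person { name :: String, age :: Int }

  cmpPerson :: Person -> Person -> Ordering   -- by age, then name as tie-breaker
  cmpPerson p q =
    compare (age p) (age q) `thenCmp` compare (name p) (name q)

  -- e.g.  sortBy cmpPerson people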

 and a partial ordering class
 
 type POrdering = Maybe Ordering
 
 class POrd a where
   pcompare :: a -> a -> POrdering
 
 instance Ord a => POrd a where
   pcompare a b = Just (compare a b)

I know partial orderings are used in mathematics, but have you seen a
practical use in programming?  I'm genuinely curious.

 Just my £0.02 (about US$0.04 I believe).

It seems I've been out-bid!  ;-)




Re: type of minimumBy

2000-05-18 Thread S.D.Mechveliani

(+), (<), ...  are different, because they have a classical tradition
of being applied as binary infix operations.
And  gcd, min, max, lcm  do not have this "infix" tradition.

--
Sergey Mechveliani
[EMAIL PROTECTED]