Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Sjoerd Visscher

On Dec 21, 2010, at 6:57 PM, austin seipp wrote:

 https://gist.github.com/750279

I took Austin's code and modified it to run on a Tree GADT which is 
parameterized by its shape:

https://gist.github.com/752982

Would this count as a function mirror with proof that mirror (mirror x) == x?
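The gist isn't inlined here, but a minimal sketch of the shape-indexed idea (the names and details below are only an approximation, not the exact code at the link) looks something like this:

{-# LANGUAGE EmptyDataDecls, GADTs, TypeFamilies #-}

-- shape indices: phantom types describing the tree's shape
data TipS
data NodeS l r

-- the tree is indexed by its shape; the payload type a is ordinary
data Tree s a where
  Tip  :: Tree TipS a
  Node :: Tree l a -> a -> Tree r a -> Tree (NodeS l r) a

-- mirror at the type level ...
type family Mirror s
type instance Mirror TipS        = TipS
type instance Mirror (NodeS l r) = NodeS (Mirror r) (Mirror l)

-- ... and at the value level, with the index tracking the change
mirror :: Tree s a -> Tree (Mirror s) a
mirror Tip          = Tip
mirror (Node l x r) = Node (mirror r) x (mirror l)

-- the proof that mirroring twice restores the shape
data TypeEq a b where Refl :: TypeEq a a

mirrorMirror :: Tree s a -> TypeEq (Mirror (Mirror s)) s
mirrorMirror Tip          = Refl
mirrorMirror (Node l _ r) =
  case (mirrorMirror l, mirrorMirror r) of
    (Refl, Refl) -> Refl

Note that the index only pins down the shape; nothing here says anything about where the payload values end up.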

--
Sjoerd Visscher





Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Daniel Peebles
I think it's pretty legit. You aren't actually making a claim about the
values in the tree but I think parametricity handles that for you,
especially since you have existential types for the payload at every tree
level (so you can't go shuffling those around).

The only thing missing (and that you can't change in Haskell) is that your
statement is about Mirror (Mirror x) == x, rather than mirror (mirror x) ==
x. mirror gives you Mirror but there's currently no proof to show that it's
the only way to do it, so there's a missing step there, I think.

Anyway, for those talking about the coinductive proof, I made one in Agda:
http://hpaste.org/42516/mirrormirror

Simulating bottoms wouldn't be too hard, but I don't think the statement is
even true in the presence of bottoms, is it?

Dan

On Thu, Dec 23, 2010 at 9:27 AM, Sjoerd Visscher sjo...@w3future.com wrote:


 On Dec 21, 2010, at 6:57 PM, austin seipp wrote:

  https://gist.github.com/750279

 I took Austins code and modified it to run on a Tree GADT which is
 parameterized by its shape:

 https://gist.github.com/752982

 Would this count as a function mirror with proof that mirror (mirror x) ==
 x?

 --
 Sjoerd Visscher





Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Ryan Ingram
On Thu, Dec 23, 2010 at 8:19 AM, Daniel Peebles pumpkin...@gmail.com wrote:
 Simulating bottoms wouldn't be too hard, but I don't think the statement is
 even true in the presence of bottoms, is it?

Isn't it?

data Tree a = Tip | Node (Tree a) a (Tree a)

mirror :: Tree a -> Tree a
mirror Tip = Tip
mirror (Node x y z) = Node (mirror z) y (mirror x)

--

-- base cases
mirror (mirror _|_)
 = mirror _|_
 = _|_

mirror (mirror Tip)
 = mirror Tip
 = Tip

-- inductive case
mirror (mirror (Node x y z))
  = mirror (Node (mirror z) y (mirror x))
  = Node (mirror (mirror x)) y (mirror (mirror z))
  -- induction
  = Node x y z
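For what it's worth, the base case can also be checked operationally. A small sketch, assuming the Tree and mirror definitions above and GHC's behaviour of surfacing a forced undefined as a catchable exception:

import Control.Exception (SomeException, evaluate, try)

checkBottomCase :: IO ()
checkBottomCase = do
  r <- try (evaluate (mirror (mirror (undefined :: Tree ()))))
         :: IO (Either SomeException (Tree ()))
  putStrLn $ case r of
    Left _  -> "mirror (mirror _|_) is _|_, as in the base case"
    Right _ -> "unexpectedly defined"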

  -- ryan



Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Ryan Ingram
On Tue, Dec 21, 2010 at 9:57 AM, austin seipp a...@hacks.yi.org wrote:
 For amusement, I went ahead and actually implemented 'Mirror' as a
 type family, and used a little bit of hackery thanks to GHC to prove
 that indeed, 'mirror (mirror x) = x', since with a type family we can
 express 'mirror' as a type level function via type families. It uses
 Ryan Ingram's approach to lightweight dependent type programming in
 Haskell.

 https://gist.github.com/750279

Thanks for the shout out :)

I think the type of your proof term is incorrect, though; it requires
the proof to be passed in, which is sort of a circular logic.

The type you want is

proof1 :: forall r x. BinTree x => ((x ~ Mirror (Mirror x)) => x -> r) -> r
proof1 t k = ...

Alternatively,

data TypeEq a b where Refl :: TypeEq a a
proof1 :: forall x. BinTree x => x -> TypeEq (Mirror (Mirror x)) x
proof1 t = ...


Here's an implementation for the latter

newtype P x = P { unP :: TypeEq (Mirror (Mirror x)) x }

baseCase :: P Tip
baseCase = P Refl

inductiveStep :: forall a b c. P a -> P c -> P (Node a b c)
inductiveStep (P Refl) (P Refl) = P Refl

proof1 t = unP $ treeInduction t baseCase inductiveStep

(I haven't tested this, but I believe it should work)
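Spelling the whole thing out as one self-contained sketch (this departs a little from the gist: there is no treeInduction, and TreeCase/treeCase are my names for the closed-class evidence):

{-# LANGUAGE EmptyDataDecls, GADTs, TypeFamilies #-}

-- type-level trees
data Tip
data Node l v r

-- mirror, repeated at the type level
type family Mirror t
type instance Mirror Tip          = Tip
type instance Mirror (Node l v r) = Node (Mirror r) v (Mirror l)

-- the "closed class" trick: the only evidence an instance can supply
-- is one of these two constructors, so every instance must look like
-- Tip or Node
data TreeCase t where
  CaseTip  :: TreeCase Tip
  CaseNode :: (BinTree l, BinTree r) => TreeCase (Node l v r)

class BinTree t where
  treeCase :: TreeCase t

instance BinTree Tip where
  treeCase = CaseTip
instance (BinTree l, BinTree r) => BinTree (Node l v r) where
  treeCase = CaseNode

data TypeEq a b where Refl :: TypeEq a a

-- the two cases of the induction
baseCase :: TypeEq (Mirror (Mirror Tip)) Tip
baseCase = Refl

inductiveStep :: TypeEq (Mirror (Mirror a)) a
              -> TypeEq (Mirror (Mirror c)) c
              -> TypeEq (Mirror (Mirror (Node a b c))) (Node a b c)
inductiveStep Refl Refl = Refl

-- induction over the closed class ties them together
proof :: TreeCase t -> TypeEq (Mirror (Mirror t)) t
proof CaseTip  = baseCase
proof CaseNode = inductiveStep (proof treeCase) (proof treeCase)

mirrorMirrorId :: BinTree t => TypeEq (Mirror (Mirror t)) t
mirrorMirrorId = proof treeCase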

  -- ryan



Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Daniel Peebles
Fair enough :) that'll teach me to hypothesize something without thinking
about it! I guess I could amend my coinductive proof:

http://hpaste.org/paste/42516/mirrormirror_with_bottom#p42517

does that cover bottom-ness adequately? I can't say I've thought through it
terribly carefully.

On Thu, Dec 23, 2010 at 12:30 PM, Ryan Ingram ryani.s...@gmail.com wrote:

 On Thu, Dec 23, 2010 at 8:19 AM, Daniel Peebles pumpkin...@gmail.com
 wrote:
  Simulating bottoms wouldn't be too hard, but I don't think the statement
 is
  even true in the presence of bottoms, is it?

 Isn't it?

 data Tree a = Tip | Node (Tree a) a (Tree a)

 mirror :: Tree a -> Tree a
 mirror Tip = Tip
 mirror (Node x y z) = Node (mirror z) y (mirror x)

 --

 -- base cases
 mirror (mirror _|_)
  = mirror _|_
  = _|_

 mirror (mirror Tip)
  = mirror Tip
  = Tip

 -- inductive case
 mirror (mirror (Node x y z))
  = mirror (Node (mirror z) y (mirror x))
  = Node (mirror (mirror x)) y (mirror (mirror z))
  -- induction
  = Node x y z

  -- ryan



Re: [Haskell-cafe] Proof in Haskell

2010-12-23 Thread Dan Doel
On Thursday 23 December 2010 12:52:07 pm Daniel Peebles wrote:
 Fair enough :) that'll teach me to hypothesize something without thinking
 about it! I guess I could amend my coinductive proof:
 
 http://hpaste.org/paste/42516/mirrormirror_with_bottom#p42517
 
 does that cover bottom-ness adequately? I can't say I've thought through it
 terribly carefully.

That isn't the usual way to model partially defined values. For instance, you 
can write functions with that datatype that would not be monotone:

  pdefined : {A : Set} -> Tree A -> Bool
  pdefined ⊥ = false
  pdefined _ = true

It's somewhat more accurate to take the partiality monad:

  data D (A : Set) : Set where
    now   : A -> D A
    later : ∞ (D A) -> D A

  ⊥ : {A : Set} -> D A
  ⊥ = later (♯ ⊥)

Then we're interested in an equivalence relation on D As where two values are 
equal if they either both diverge, or both converge to equal inhabitants of A 
(not worrying about how many steps each takes to do so).

Then, the partially defined trees are given by:

  mutual {A}
    Tree = D Tree'

    data Tree' : Set where
      node : Tree -> A -> Tree -> Tree'
      tip  : Tree'

And equivalence of these trees makes use of the above equivalence for D-
wrapped things. (It's actually a little weird that this works, I think, if you 
expect Tree' to be a least fixed point; but Agda does not work that way).
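In Haskell the same idea would be transliterated roughly as follows (a sketch; the names are mine, not the ones in the paste):

-- a computation either yields a value now or is delayed one more step
data D a = Now a | Later (D a)

-- bottom is the computation that is delayed forever
bottomD :: D a
bottomD = Later bottomD

-- partially defined trees: every subtree is itself a D-computation
data Tree' a = Tip' | Node' (D (Tree' a)) a (D (Tree' a))
type PTree a = D (Tree' a)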

If you're just interested in proving mirror (mirror x) = x, though, this is 
probably overkill. Your original type should be isomorphic to the Haskell type 
in a set theoretic way of thinking, and the definition of mirror does what the 
Haskell function would do at all the values. So the fact that you can write 
functions on that Agda type that would have no corresponding implementation in 
Haskell is kind of irrelevant.

-- Dan



Re: [Haskell-cafe] Proof in Haskell

2010-12-22 Thread Patrick Browne
On 22/12/2010 14:48, Artyom Shalkhakov wrote:
 ..Do you want to prove a property of
 a function formally, using some kind of formal logic?

I am aware that functional languages do not do proofs at term level, but
the motivation for my question is to get a precise reason why this is
so. The replies from the café have clearly articulated the reasons.

Thanks to all,
Pat








[Haskell-cafe] Proof in Haskell

2010-12-21 Thread Patrick Browne
Hi,
In a previous posting[1] I asked whether there was a way to achieve a proof of
mirror (mirror x) = x

in Haskell itself. The code for the tree/mirror is below:

 module BTree where
  data Tree a = Tip | Node (Tree a) a (Tree a)

  mirror :: Tree a -> Tree a
  mirror (Node x y z) = Node (mirror z) y (mirror x)
  mirror Tip = Tip

The reply from Eugene Kirpichov was:
 It is not possible at the value level, because Haskell does not
 support dependent types and thus cannot express the type of the
 proposition 
 forall a . forall x:Tree a, mirror (mirror x) = x, 
 and therefore a proof term also cannot be constructed. 

Could anyone explain what *dependent types* mean in this context?
What is the exact meaning of forall a and forall x?

Thanks,
Pat
[1] http://www.mail-archive.com/haskell-cafe@haskell.org/msg64842.html










Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread aditya siram
I don't know the formal definition, but dependent types seem analogous
to checking an invariant at runtime.
-deech

On Tue, Dec 21, 2010 at 5:53 AM, Patrick Browne patrick.bro...@dit.ie wrote:
 Hi,
 In a previous posting[1] I asked was there a way to achieve a proof of
 mirror (mirror x) = x

 in Haskell itself. The code for the tree/mirror is below:

  module BTree where
  data Tree a = Tip | Node (Tree a) a (Tree a)

  mirror :: Tree a -> Tree a
  mirror (Node x y z) = Node (mirror z) y (mirror x)
  mirror Tip = Tip

 The reply from Eugene Kirpichov was:
 It is not possible at the value level, because Haskell does not
 support dependent types and thus cannot express the type of the
 proposition
 forall a . forall x:Tree a, mirror (mirror x) = x,
 and therefore a proof term also cannot be constructed.

 Could anyone explain what *dependent types* mean in this context?
 What is the exact meaning of forall a and forall x?

 Thanks,
 Pat
 [1] http://www.mail-archive.com/haskell-cafe@haskell.org/msg64842.html










Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Iavor Diatchki
Hi Patrick,

Indeed, you cannot really write proofs in Haskell because it is just
an ordinary (more or less) programming language and not a theorem
prover. (As an aside: you could write tests, i.e. properties which
may or may not be theorems about your program, and test them on random
data (see QuickCheck), or exhaustively---if you managed to test a
property for all possible inputs, then you have essentially proved
it.).

Now about dependent types.  Some programming languages have very
expressive type systems (even more so than Haskell!) and they allow
for types which are parameterized by values.   Such types are called
dependent types.  Here is an example:

decrement :: (x :: Int) -> (if x > 0 then Int else String)
decrement x = if x > 0 then x - 1 else "Cannot decrement"

This function maps values of type Int to either Ints or Strings.
 Note that the _type_ of the result of the function depends on the
_value_ of the input, which is why this function has a dependent type.

It turns out---and this is not at all obvious at first---that
languages with dependent types (and some other features) are suitable
not only for writing programs but, also, for proving theorems.
Theorems are expressed as types (often, dependent types), while proofs
are programs inhabiting the type of the theorem.  So, true theorems
correspond to types which have some inhabitants (proofs), while
false theorems correspond to empty types.

The correspondence between proofs-theorems and programs-types is
known as the Curry-Howard isomorphism.  Examples of some languages
which use dependent types are Coq, Agda, and Cayenne.

I hope that this helps,
-Iavor

PS: The foralls in your example are just dependent function types:  in
this setting, to prove forall (x :: A). P x amounts to writing a
function of type: (x :: A) -> P x.  In other words, this is a
function that maps values of type A to proofs of the property.
Because the function can be applied to any value of type A, we have
proved the result forall (x :: A). P x.   The dependent type arises
because the property depends on the value in question.
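To make the correspondence a bit more concrete, here is a small Haskell approximation (Haskell has no real dependent types, so only propositions about types can be stated this way; the names are illustrative):

{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- the proposition "a equals b", stated as a type ...
data Equal a b where
  Refl :: Equal a a

-- ... and total programs inhabiting such types are its proofs
sym :: Equal a b -> Equal b a
sym Refl = Refl

trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl

-- an empty type plays the role of a false proposition,
-- and "not p" is p -> False
data False
type Not p = p -> False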





On Tue, Dec 21, 2010 at 8:15 AM, aditya siram aditya.si...@gmail.com wrote:
 I don't know the formal definition, but dependent types seem analogous
 to checking an invariant at runtime.
 -deech

 On Tue, Dec 21, 2010 at 5:53 AM, Patrick Browne patrick.bro...@dit.ie wrote:
 Hi,
 In a previous posting[1] I asked was there a way to achieve a proof of
 mirror (mirror x) = x

 in Haskell itself. The code for the tree/mirror is below:

  module BTree where
  data Tree a = Tip | Node (Tree a) a (Tree a)

  mirror :: Tree a -> Tree a
  mirror (Node x y z) = Node (mirror z) y (mirror x)
  mirror Tip = Tip

 The reply from Eugene Kirpichov was:
 It is not possible at the value level, because Haskell does not
 support dependent types and thus cannot express the type of the
 proposition
 forall a . forall x:Tree a, mirror (mirror x) = x,
 and therefore a proof term also cannot be constructed.

 Could anyone explain what *dependent types* mean in this context?
 What is the exact meaning of forall a and forall x?

 Thanks,
 Pat
 [1] http://www.mail-archive.com/haskell-cafe@haskell.org/msg64842.html










Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread austin seipp
Patrick,

Dependent types are program types that depend on runtime values. That
is, they are essentially a type of the form:

f :: (a :: X) -> T

where 'a' is a *value* of type 'X', which is mentioned in the *type* 'T'.

You do not see such things in Haskell, because Haskell separates
values from types - the language of runtime values, and the language
of compile-time values are separate by design. In dependently typed
languages, values and types are in fact, the same terms and part of
the same language, not separated by compiler phases. The distinction
between runtime and compile time becomes blurry at this stage.

By the Curry-Howard isomorphism, we would like to make a proof of some
property, particularly that mirror (mirror x) = x - but by CH, types
are the propositions we must prove, and values are the terms we must
prove them with (such as in first-order or propositional logics.) This
is what Eugene means when he says it is not possible to prove this at
the value level. Haskell cannot directly express the following type:

forall a. forall x:Tree a, mirror (mirror x) = x

We can think of the 'forall' parts of the type as bringing the
variables 'a' and 'x' into scope with fresh names, and they are
universally quantified so that this property is established for any
given value of type 'a'. As you can see, this is a dependent type - we
establish we will be using something of type a, and then we establish
that we also have a *value* x of type 'Tree a', which is mentioned
finally by saying that 'mirror (mirror x) = x'.

Because we cannot directly express the above type (as we cannot mix
runtime values and type level values), we also cannot express
a proof of such a proposition directly in Haskell.

However, if we can express such a type - that is, a type which can
encode the 'Mirror' function - it is possible to construct a
value-level term which is a proof of such a proposition. By CH, this
is also possible, only not nearly as palatable because we encode the
definition of 'mirror' at the value level as well as the type level -
again, types and values in haskell exist in different realms. With
dependent types we only need to write 'mirror' *once* because we can
use it at both the type and value level. Also, because dependently
typed languages have more expressive type systems in general, it
quickly becomes a burden to do dependent-type 'tricks' in Haskell,
although some are manageable.

For amusement, I went ahead and actually implemented 'Mirror' as a
type family, and used a little bit of hackery thanks to GHC to prove
that indeed, 'mirror (mirror x) = x', since with a type family we can
express 'mirror' as a type level function via type families. It uses
Ryan Ingram's approach to lightweight dependent type programming in
Haskell.

https://gist.github.com/750279

As you can see, it is quite unsatisfactory since we must encode Mirror
at the type level, as well as 'close off' a typeclass to constrain the
values over which we can make our proof (in GHC, typeclasses are
completely open and can have many nonsensical instances. We could make
an instance for, say 'Mirror Int = Tip', for example, which breaks our
proof. Ryan's trick is to encode 'closed' type classes by using a type
class that implements a form of 'type case', and any instances must
prove that the type being instantiated is either of type 'Tip' or type
'Node' by parametricity.) And well, the tree induction step is just a
little bit painful, isn't it?

So that's an answer, it's just probably not what you wanted.

However, at one point I wrote about proving exactly such a thing (your
exact code, coincidentally) in Haskell, only using an 'extraction
tool' that extracts Isabelle theories from Haskell source code,
allowing you to prove such properties with an automated theorem
prover.

http://hacks.yi.org/~as/posts/2009-04-20-A-little-fun.html

Of course, this depends on the extractor tool itself being correct -
if it is not, it could incorrectly translate your Haskell to similar
but perhaps subtly wrong Isabelle code, and then you'll be proving the
wrong thing! Haskabelle also does support much more than Haskell98 if
I remember correctly, although that may have changed. I also remember
reading that Galois has a project of having 'provable haskell' using
Isabelle, but I can't verify that I suppose.

On Tue, Dec 21, 2010 at 5:53 AM, Patrick Browne patrick.bro...@dit.ie wrote:
 Hi,
 In a previous posting[1] I asked was there a way to achieve a proof of
 mirror (mirror x) = x

 in Haskell itself. The code for the tree/mirror is below:

  module BTree where
  data Tree a = Tip | Node (Tree a) a (Tree a)

  mirror :: Tree a -> Tree a
  mirror (Node x y z) = Node (mirror z) y (mirror x)
  mirror Tip = Tip

 The reply from Eugene Kirpichov was:
 It is not possible at the value level, because Haskell does not
 support dependent types and thus cannot express the type of the
 proposition
 forall a . forall x:Tree a, mirror (mirror x) = x,
 and 

Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Felipe Almeida Lessa
On Tue, Dec 21, 2010 at 3:57 PM, austin seipp a...@hacks.yi.org wrote:
 However, at one point I wrote about proving exactly such a thing (your
 exact code, coincidentally) in Haskell, only using an 'extraction
 tool' that extracts Isabelle theories from Haskell source code,
 allowing you to prove such properties with an automated theorem
 prover.

You may also write your program, for example, using Coq and then extract
correct Haskell code from it.  I'm far from a Coq expert so there must be a
better way, but the following works =):

Inductive Tree (A : Type) :=
  | Tip : Tree A
  | Node : Tree A -> A -> Tree A -> Tree A.

Fixpoint mirror {A : Type} (x : Tree A) : Tree A :=
  match x with
  | Tip => Tip A
  | Node l v r => Node A (mirror r) v (mirror l)
  end.

Theorem mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
  induction x; simpl; auto.
  rewrite IHx1; rewrite IHx2; trivial.
Qed.

Extraction Language Haskell.
Recursive Extraction mirror.

Then Coq generates the following proven-correct code:

data Tree a = Tip
| Node (Tree a) a (Tree a)

mirror :: (Tree a1) -> Tree a1
mirror x =
  case x of
    Tip -> Tip
    Node l v r -> Node (mirror r) v (mirror l)

Cheers! =)

-- 
Felipe.


Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Daniel Fischer
On Tuesday 21 December 2010 19:34:11, Felipe Almeida Lessa wrote:

 Theorem mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
   induction x; simpl; auto.
   rewrite IHx1; rewrite IHx2; trivial.
 Qed.

Since mirror (mirror x) = x doesn't hold in Haskell, I take it that Coq 
doesn't allow infinite structures?




Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Daniel Fischer
I wrote:

 On Tuesday 21 December 2010 19:34:11, Felipe Almeida Lessa wrote:

 Theorem mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
   induction x; simpl; auto.
   rewrite IHx1; rewrite IHx2; trivial.
 Qed.

 Since mirror (mirror x) = x doesn't hold in Haskell, I take it that Coq 
 doesn't allow infinite structures?

Oops, mirroring infinite binary trees should work. Ignore above.





Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Colin Paul Adams
 Daniel == Daniel Fischer daniel.is.fisc...@googlemail.com writes:

Daniel On Tuesday 21 December 2010 19:34:11, Felipe Almeida Lessa wrote:
 
 Theorem mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
 induction x; simpl; auto.  rewrite IHx1; rewrite IHx2; trivial.
 Qed.

Daniel Since mirror (mirror x) = x doesn't hold in Haskell, I take
Daniel it that Coq doesn't allow infinite structures?

Why doesn't it hold?
-- 
Colin Adams
Preston Lancashire
()  ascii ribbon campaign - against html e-mail
/\  www.asciiribbon.org   - against proprietary attachments



Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Daniel Fischer
On Tuesday 21 December 2010 20:11:37, Colin Paul Adams wrote:
  Daniel == Daniel Fischer daniel.is.fisc...@googlemail.com
  writes:

 Daniel On Tuesday 21 December 2010 19:34:11, Felipe Almeida Lessa 
wrote:
  Theorem mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
 
  induction x; simpl; auto.  rewrite IHx1; rewrite IHx2; trivial.
  Qed.

 Daniel Since mirror (mirror x) = x doesn't hold in Haskell, I take
 Daniel it that Coq doesn't allow infinite structures?

 Why doesn't it hold?

Hit send before I finished thinking, it should hold.



Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Brandon Moore




- Original Message 
 From: Colin Paul Adams co...@colina.demon.co.uk
 To: Daniel Fischer daniel.is.fisc...@googlemail.com
 Cc: haskell-cafe@haskell.org
 Sent: Tue, December 21, 2010 1:11:37 PM
 Subject: Re: [Haskell-cafe] Proof in Haskell
 
  Daniel == Daniel Fischer daniel.is.fisc...@googlemail.com  writes:
 
 Daniel On Tuesday 21 December 2010 19:34:11,  Felipe Almeida Lessa wrote:
  
  Theorem  mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
   induction x; simpl; auto.  rewrite IHx1; rewrite IHx2;  trivial.
  Qed.
 
 Daniel Since  mirror (mirror x) = x doesn't hold in Haskell, I take
  Daniel it that Coq doesn't allow infinite structures?
 
 Why doesn't it  hold?

You are both onto something. It is true, but the Coq proof only covered
finite trees. Infinite trees could be defined with coinduction, but even that
doesn't allow the possibility of bottoms in the tree.

CoInductive Tree (A:Set) :=
  Branch (l r : Tree A) | Leaf.

CoFixpoint mirror {A:Set} (x : Tree A) : Tree A :=
  match x with
  | Leaf => Leaf A
  | Branch l r => Branch A (mirror r) (mirror l)
  end.

Also, you'd have to define some notion of Bisimilarity rather than working
with the direct equality.

CoInductive bisimilar {A : Set} : Tree A -> Tree A -> Prop :=
  | bisim_Leaf : bisimilar (Leaf A) (Leaf A)
  | bisim_Branch (l1 r1 l2 r2 : Tree A) : bisimilar l1 l2 -> bisimilar r1 r2 ->
      bisimilar (Branch A l1 r1) (Branch A l2 r2).

I was hoping to write a proof, but it's much more annoying to work with 
coinductive definitions.
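For comparison, the executable shadow of bisimilarity in Haskell is just structural equality on possibly infinite trees. A sketch, dropping the unused payload parameter:

data Tree = Branch Tree Tree | Leaf

-- This can refute a difference that it reaches at a finite depth, but on
-- a pair of bisimilar infinite trees it diverges instead of delivering a
-- proof, which is exactly why the Prop-level definition above is needed.
bisim :: Tree -> Tree -> Bool
bisim Leaf           Leaf           = True
bisim (Branch l1 r1) (Branch l2 r2) = bisim l1 l2 && bisim r1 r2
bisim _              _              = False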

Brandon



  



Re: [Haskell-cafe] Proof in Haskell

2010-12-21 Thread Felipe Almeida Lessa
On Tue, Dec 21, 2010 at 5:02 PM, Daniel Fischer
daniel.is.fisc...@googlemail.com wrote:
 On Tuesday 21 December 2010 19:34:11, Felipe Almeida Lessa wrote:

 Theorem mirror_mirror : forall A (x : Tree A), mirror (mirror x) = x.
   induction x; simpl; auto.
   rewrite IHx1; rewrite IHx2; trivial.
 Qed.

 Since mirror (mirror x) = x doesn't hold in Haskell, I take it that Coq
 doesn't allow infinite structures?

Even if the theorem does hold with infinite structures, I don't really
know the answer to your question.  I'm just starting to study Coq in
my free time =).  My guess would be no, because functions need to be
total.  But that's just a wild guess, and you shouldn't take my word
for it =).

Cheers!

-- 
Felipe.



[Haskell-cafe] Proof in Haskell

2009-09-25 Thread pat browne
Hi,
Below is a function that returns a mirror of a tree, originally from:

http://www.nijoruj.org/~as/2009/04/20/A-little-fun.html

where it was used to demonstrate the use of Haskabelle(1) which converts
Haskell programs to the Isabelle theorem prover. Isabelle was used to
show that the Haskell implementation of mirror is a model for the axiom:

 mirror (mirror x) = x

My question is this:
Is there any way to achieve such a proof in Haskell itself?
GHC appears to reject equations such as
mirror (mirror x) = x
mirror (mirror(Node x y z)) = Node x y z


Regards,
Pat


=CODE=
module BTree where

data Tree a = Tip
| Node (Tree a) a (Tree a)

mirror :: Tree a -> Tree a
mirror (Node x y z) = Node (mirror z) y (mirror x)
mirror Tip = Tip

(1)Thanks to John Ramsdell for the link to Haskabelle:
http://www.cl.cam.ac.uk/research/hvg/Isabelle/haskabelle.html).



Re: [Haskell-cafe] Proof in Haskell

2009-09-25 Thread Eugene Kirpichov
It is not possible at the value level, because Haskell does not
support dependent types and thus cannot express the type of the
proposition forall a . forall x:Tree a, mirror (mirror x) = x, and
therefore a proof term also cannot be constructed.

However, if you manage to express those trees at type level, probably
with typeclasses and type families, you might have some success.

2009/9/25 pat browne patrick.bro...@comp.dit.ie:
 Hi,
 Below is a function that returns a mirror of a tree, originally from:

 http://www.nijoruj.org/~as/2009/04/20/A-little-fun.html

 where it was used to demonstrate the use of Haskabelle(1) which converts
 Haskell programs to the Isabelle theorem prover. Isabelle was used to
 show that the Haskell implementation of mirror is a model for the axiom:

  mirror (mirror x) = x

 My question is this:
 Is there any way to achieve such a proof in Haskell itself?
 GHC appears to reject equations such has
 mirror (mirror x) = x
 mirror (mirror(Node x y z)) = Node x y z


 Regards,
 Pat


 =CODE=
 module BTree where

 data Tree a = Tip
            | Node (Tree a) a (Tree a)

  mirror :: Tree a -> Tree a
 mirror (Node x y z) = Node (mirror z) y (mirror x)
 mirror Tip = Tip

 (1)Thanks to John Ramsdell for the link to Haskabelle:
 http://www.cl.cam.ac.uk/research/hvg/Isabelle/haskabelle.html).





-- 
Eugene Kirpichov
Web IR developer, market.yandex.ru


Re: [Haskell-cafe] Proof in Haskell

2009-09-25 Thread Paul Johnson
One alternative approach is to use QuickCheck to test many trees and 
look for counter-examples.  You can also use SmallCheck to do an 
exhaustive check up to a chosen size of tree.


To do this with QuickCheck you would write a function such as

   prop_mirror :: Eq a => Tree a -> Bool
   prop_mirror x = (mirror (mirror x)) == x

You would also need to define an instance of Arbitrary for Tree to
create random values of the Tree type (and an Eq instance so that ==
works).  Then you can call:


   quickCheck prop_mirror

and it will call the prop_mirror function with 100 random test cases.  
Not the formal proof you wanted, but still very effective at finding bugs.
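One way to fill in the missing pieces (a sketch; the generator and the monomorphic Int instantiation are just choices, not the only way to do it):

import Control.Monad (liftM3)
import Test.QuickCheck

data Tree a = Tip | Node (Tree a) a (Tree a)
  deriving (Eq, Show)

mirror :: Tree a -> Tree a
mirror (Node x y z) = Node (mirror z) y (mirror x)
mirror Tip          = Tip

-- Random trees; halving the size bound at each level keeps generation finite.
instance Arbitrary a => Arbitrary (Tree a) where
  arbitrary = sized tree
    where
      tree 0 = return Tip
      tree n = oneof
        [ return Tip
        , liftM3 Node (tree (n `div` 2)) arbitrary (tree (n `div` 2)) ]

prop_mirror :: Tree Int -> Bool
prop_mirror x = mirror (mirror x) == x

main :: IO ()
main = quickCheck prop_mirror

Running main should then report that all 100 test cases passed.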




On 25/09/09 12:14, pat browne wrote:

Hi,
Below is a function that returns a mirror of a tree, originally from:

http://www.nijoruj.org/~as/2009/04/20/A-little-fun.html

where it was used to demonstrate the use of Haskabelle(1) which converts
Haskell programs to the Isabelle theorem prover. Isabelle was used to
show that the Haskell implementation of mirror is a model for the axiom:

  mirror (mirror x) = x

My question is this:
Is there any way to achieve such a proof in Haskell itself?
GHC appears to reject equations such has
mirror (mirror x) = x
mirror (mirror(Node x y z)) = Node x y z


Regards,
Pat


=CODE=
module BTree where

data Tree a = Tip
 | Node (Tree a) a (Tree a)

mirror :: Tree a -> Tree a
mirror (Node x y z) = Node (mirror z) y (mirror x)
mirror Tip = Tip

(1)Thanks to John Ramsdell for the link to Haskabelle:
http://www.cl.cam.ac.uk/research/hvg/Isabelle/haskabelle.html).



Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-13 Thread Dan Doel
On Wednesday 12 November 2008 7:05:02 pm Jonathan Cast wrote:
 I think the point is that randomIO is non-deterministic (technically,
 pseudo-random) but causal --- the result is completely determined by
 events that precede its completion.  unsafeInterleaveIO, by contrast, is
 arguably (sometimes) deterministic --- but is, regardless, definitely
 not causal.

Sure, it's weird if you actually know how it's allegedly making its decisions 
(predicting the future, but not really). But all you can actually observe is 
that sometimes it gives (1,1), and sometimes it gives (2,1), and maybe it 
coincidentally always seems to give (1,1) to f, but to g, which is 
observationally equal to f, excepting bottoms, it always seems to give (2,1). 
That's a weird situation, but if you make things opaque enough, I think you 
can fall back on IO's nondeterminism.

Another weird situation (it seems to me) goes like:

foo = unsafeInterleaveIO $ fail "foo"

do x <- foo
   putStrLn "bar"
   return $! x

where you can't simply fall back to saying "foo throws an exception", because 
it doesn't prevent bar from printing. So what is actually throwing an 
exception is some later IO action that doesn't have any relation to foo 
besides using a 'pure' result from it. However, I suppose you can push off such 
oddities in a similar way, saying that IO actions can throw delayed, 
asynchronous exceptions, and what do you know, it always happens to end up in 
the action that evaluated x. :)
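For concreteness, the example above made runnable (a sketch; evaluate stands in for the original return $! x):

import System.IO.Unsafe (unsafeInterleaveIO)
import Control.Exception (evaluate)

foo :: IO Int
foo = unsafeInterleaveIO (fail "foo")

main :: IO ()
main = do
  x <- foo
  putStrLn "bar"       -- prints even though foo "failed"
  _ <- evaluate x      -- the delayed exception only surfaces here
  return ()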

Of course, those sorts of explanations might be good enough to conclude that 
unsafeInterleaveIO doesn't break referential transparency, but they may make 
for a very poor operational semantics of IO and such. foo throws some delayed 
asynchronous exception? Well, that doesn't tell when or how. And it doesn't 
explain how weirdTuple 'nondeterministically' chooses between (1,1) and (2,1). 
So that would probably lead you to another semantics (a more operational one) 
that brings up how weirdTuple reads the future, or even that mutation happens 
as a result of evaluating pure values. But, the point is, I think, that such a 
semantics doesn't indicate that referential transparency is being violated 
(which would be a notion from another sort of semantics where there's no way 
to notice something weird is going on).

Anyhow, I'm done rambling. (Hope it didn't go on too long; I know we weren't 
fundamentally in any kind of disagreement.) :)

Cheers,
-- Dan


[Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Andrew Birkett

Hi,

Is there a formal proof that the Haskell language is referentially 
transparent?  Many people state haskell is RT without backing up that 
claim.  I know that, in practice, I can't write any counter-examples but 
that's a bit hand-wavy.  Is there a formal proof that, for all possible 
haskell programs, we can replace coreferent expressions without changing 
the meaning of a program?


(I was writing a blog post about the origins of the phrase 
'referentially transparent' and it made me think about this)


Andrew

--
- http://www.nobugs.org -


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Martin Sulzmann

Lennart Augustsson wrote:

You can't write a straightforward dynamic semantics (in, say,
denotational style) for Haskell.
The problem is that with type classes you need to know the types to
compute the values.
You could use a two step approach: first make a static semantics that
does type inference/checking and translates Haskell into a different
form that has resolved all overloading.
And, secondly, you can write a dynamic semantics for that language.

But since the semantics has to have the type inference engine inside
it, it's going to be a pain.

It's possible that there's some more direct approach that represents
types as some kind of runtime values, but nobody (to my knowledge) has
done that.

  

This has been done. For example, check out the type class semantics given in

Thatte, Semantics of type classes revisited, LFP '94
Stuckey and Sulzmann, A Theory of Overloading, TOPLAS'05

Harrison: A Simple Semantics for Polymorphic Recursion. APLAS 2005
is also related I think.

The ICFP'08 poster

Unified Type Checking for Type Classes and Type Families , Tom 
Schrijvers and Martin Sulzmann


suggests that a type-passing semantics can even be programmed by 
(mis)using type families.


- Martin



  -- Lennart

On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer [EMAIL PROTECTED] wrote:
  

On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED] wrote:


Andrew Birkett wrote:
  

Hi,

Is a formal proof that the Haskell language is referentially transparent?
 Many people state haskell is RT without backing up that claim.  I know
that, in practice, I can't write any counter-examples but that's a bit
handy-wavy.  Is there a formal proof that, for all possible haskell
programs, we can replace coreferent expressions without changing the meaning
of a program?


The (well, a natural approach to a) formal proof would be to give a formal
semantics for haskell.
  

Haskell 98 does not seem that big to me (it's not teeny, but it's
nothing like C++), yet we are continually embarrassed about not having
any formal semantics.  What are the challenges preventing its
creation?

Luke


RE: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Mitchell, Neil

 It's possible that there's some more direct approach that 
 represents types as some kind of runtime values, but nobody 
 (to my knowledge) has done that.

I don't think it's possible - I tried it and failed.

Consider:

show (f [])

Where f has the semantics of id, but has either the return type [Int] or
[Char] - you get different results. Without computing the types
everywhere, I don't see how you can determine the precise type of [].
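Spelled out as a sketch, with the element type picked by annotations at the call sites:

-- f is just id, but the result of show depends on which element type
-- the context chooses for [].
f :: [a] -> [a]
f = id

atInt, atChar :: String
atInt  = show (f ([] :: [Int]))   -- "[]"
atChar = show (f ([] :: [Char]))  -- "\"\""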

Thanks

Neil

 On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer 
 [EMAIL PROTECTED] wrote:
  On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean 
 [EMAIL PROTECTED] wrote:
  Andrew Birkett wrote:
 
  Hi,
 
  Is a formal proof that the Haskell language is 
 referentially transparent?
   Many people state haskell is RT without backing up 
 that claim.  I 
  know that, in practice, I can't write any counter-examples but 
  that's a bit handy-wavy.  Is there a formal proof that, for all 
  possible haskell programs, we can replace coreferent expressions 
  without changing the meaning of a program?
 
  The (well, a natural approach to a) formal proof would be 
 to give a 
  formal semantics for haskell.
 
  Haskell 98 does not seem that big to me (it's not teeny, but it's 
  nothing like C++), yet we are continually embarrassed about 
 not having 
  any formal semantics.  What are the challenges preventing its 
  creation?
 
  Luke


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Jake Mcarthur


On Nov 12, 2008, at 7:09 AM, Lennart Augustsson wrote:


It's possible that there's some more direct approach that represents
types as some kind of runtime values, but nobody (to my knowledge) has
done that.


I think JHC passes types at runtime, using them in C switch statements  
for overloading, but I don't know if an implementation like that is  
really what we would need for this.


- - Jake


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Jules Bean

Andrew Birkett wrote:

Hi,

Is a formal proof that the Haskell language is referentially 
transparent?  Many people state haskell is RT without backing up that 
claim.  I know that, in practice, I can't write any counter-examples but 
that's a bit handy-wavy.  Is there a formal proof that, for all possible 
haskell programs, we can replace coreferent expressions without changing 
the meaning of a program?


The (well, a natural approach to a) formal proof would be to give a 
formal semantics for haskell.


Referential transparency would be an obvious property of the semantics. 
Soundness would show that it carried over to the language.


Jules


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Martin Sulzmann


Correct Lennart. The below mentioned papers assume some
evidence translation of type class programs. If you need/want
a direct semantics/translation of programs you'll need to
impose some restrictions on the set of allowable type class programs.

For such an approach see

  Martin Odersky, Philip Wadler, Martin Wehr: A Second Look at 
Overloading. FPCA 1995:


Roughly, the restriction says, you can overload the argument but not the 
result (types).


- Martin


Lennart Augustsson wrote:

I had a quick look at Stuckey and Sulzmann, A Theory of Overloading
and it looks to me like the semantics is given by evidence
translation.  So first you do evidence translation, and then give
semantics to the translated program.  So this looks like the two step
approach I first mentioned.
Or have I misunderstood what you're doing?

What I meant hasn't been done is a one step semantics directly in
terms of the source language.
I guess I also want it to be compositional, which I think is where
things break down.

  -- Lennart

On Wed, Nov 12, 2008 at 2:26 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
  

Lennart Augustsson wrote:


You can't write a straightforward dynamic semantics (in, say,
denotational style) for Haskell.
The problem is that with type classes you need to know the types
compute the values.
You could use a two step approach: first make a static semantics that
does type inference/checking and translates Haskell into a different
form that has resolved all overloading.
And, secondly, you can write a dynamic semantics for that language.

But since the semantics has to have the type inference engine inside
it, it's going to be a pain.

It's possible that there's some more direct approach that represents
types as some kind of runtime values, but nobody (to my knowledge) has
done that.


  

This has been done. For example, check out the type class semantics given in

Thatte, Semantics of type classes revisited, LFP '94
Stuckey and Sulzmann, A Theory of Overloading, TOPLAS'05

Harrison: A Simple Semantics for Polymorphic Recursion. APLAS 2005
is also related I think.

The ICFP'08 poster

Unified Type Checking for Type Classes and Type Families , Tom Schrijvers
and Martin Sulzmann

suggests that a type-passing semantics can even be programmed by (mis)using
type families.

- Martin




 -- Lennart

On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer [EMAIL PROTECTED] wrote:

  

On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED]
wrote:



Andrew Birkett wrote:

  

Hi,

Is a formal proof that the Haskell language is referentially
transparent?
 Many people state haskell is RT without backing up that claim.  I
know
that, in practice, I can't write any counter-examples but that's a bit
handy-wavy.  Is there a formal proof that, for all possible haskell
programs, we can replace coreferent expressions without changing the
meaning
of a program?



The (well, a natural approach to a) formal proof would be to give a
formal
semantics for haskell.

  

Haskell 98 does not seem that big to me (it's not teeny, but it's
nothing like C++), yet we are continually embarrassed about not having
any formal semantics.  What are the challenges preventing its
creation?

Luke


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Lennart Augustsson
I had a quick look at Stuckey and Sulzmann, A Theory of Overloading
and it looks to me like the semantics is given by evidence
translation.  So first you do evidence translation, and then give
semantics to the translated program.  So this looks like the two step
approach I first mentioned.
Or have I misunderstood what you're doing?

What I meant hasn't been done is a one step semantics directly in
terms of the source language.
I guess I also want it to be compositional, which I think is where
things break down.
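For readers following along, the two-step idea on a tiny example (a sketch of dictionary passing, not the exact construction in the papers):

-- source program:   square :: Num a => a -> a
--                    square x = x * x
-- after evidence translation the class context has become an explicit
-- dictionary argument:

data NumDict a = NumDict { times :: a -> a -> a, plus :: a -> a -> a }

squareD :: NumDict a -> a -> a
squareD d x = times d x x

numDictInt :: NumDict Int
numDictInt = NumDict { times = (*), plus = (+) }

-- squareD numDictInt 5 == 25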

  -- Lennart

On Wed, Nov 12, 2008 at 2:26 PM, Martin Sulzmann
[EMAIL PROTECTED] wrote:
 Lennart Augustsson wrote:

 You can't write a straightforward dynamic semantics (in, say,
 denotational style) for Haskell.
 The problem is that with type classes you need to know the types
 compute the values.
 You could use a two step approach: first make a static semantics that
 does type inference/checking and translates Haskell into a different
 form that has resolved all overloading.
 And, secondly, you can write a dynamic semantics for that language.

 But since the semantics has to have the type inference engine inside
 it, it's going to be a pain.

 It's possible that there's some more direct approach that represents
 types as some kind of runtime values, but nobody (to my knowledge) has
 done that.



 This has been done. For example, check out the type class semantics given in

 Thatte, Semantics of type classes revisited, LFP '94
 Stuckey and Sulzmann, A Theory of Overloading, TOPLAS'05

 Harrison: A Simple Semantics for Polymorphic Recursion. APLAS 2005
 is also related I think.

 The ICFP'08 poster

 Unified Type Checking for Type Classes and Type Families , Tom Schrijvers
 and Martin Sulzmann

 suggests that a type-passing semantics can even be programmed by (mis)using
 type families.

 - Martin


  -- Lennart

 On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer [EMAIL PROTECTED] wrote:


 On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED]
 wrote:


 Andrew Birkett wrote:


 Hi,

 Is a formal proof that the Haskell language is referentially
 transparent?
  Many people state haskell is RT without backing up that claim.  I
 know
 that, in practice, I can't write any counter-examples but that's a bit
 handy-wavy.  Is there a formal proof that, for all possible haskell
 programs, we can replace coreferent expressions without changing the
 meaning
 of a program?


 The (well, a natural approach to a) formal proof would be to give a
 formal
 semantics for haskell.


 Haskell 98 does not seem that big to me (it's not teeny, but it's
 nothing like C++), yet we are continually embarrassed about not having
 any formal semantics.  What are the challenges preventing its
 creation?

 Luke


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Janis Voigtlaender

Andrew Birkett wrote:

Hi,

Is a formal proof that the Haskell language is referentially 
transparent?  Many people state haskell is RT without backing up that 
claim.  I know that, in practice, I can't write any counter-examples but 
that's a bit handy-wavy.  Is there a formal proof that, for all possible 
haskell programs, we can replace coreferent expressions without changing 
the meaning of a program?


(I was writing a blog post about the origins of the phrase 
'referentially transparent' and it made me think about this)


Answering this question, or actually even formalizing the statement you
want to prove, is more or less equivalent to writing down a full
denotational candidate semantics for Haskell in a compositional style,
and proving that it is equivalent to the *actual* semantics of Haskell

Nobody has done this. Well, there is no *actual* semantics anywhere
around to which you one could prove equivalence.

So, to be precise, the question you are interested in cannot even really
be asked at the moment.

Ciao, Janis.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Edsko de Vries
See "What is a purely functional language?" by Sabry. Not quite a
formal proof about *Haskell*, but then we would first need a formal  
semantics of Haskell to be able to do that proof ;-)


On 12 Nov 2008, at 10:11, Andrew Birkett wrote:


Hi,

Is a formal proof that the Haskell language is referentially  
transparent?  Many people state haskell is RT without backing up  
that claim.  I know that, in practice, I can't write any counter- 
examples but that's a bit handy-wavy.  Is there a formal proof that,  
for all possible haskell programs, we can replace coreferent  
expressions without changing the meaning of a program?


(I was writing a blog post about the origins of the phrase  
'referentially transparent' and it made me think about this)


Andrew

--
- http://www.nobugs.org -


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Thomas Davie


On 12 Nov 2008, at 11:11, Andrew Birkett wrote:


Hi,

Is a formal proof that the Haskell language is referentially  
transparent?  Many people state haskell is RT without backing up  
that claim.  I know that, in practice, I can't write any counter- 
examples but that's a bit handy-wavy.  Is there a formal proof that,  
for all possible haskell programs, we can replace coreferent  
expressions without changing the meaning of a program?


(I was writing a blog post about the origins of the phrase  
'referentially transparent' and it made me think about this)


I think the informal proof goes along the lines of "because that's  
what the spec says" -- Haskell's operational semantics are not  
specified in the report, only IIRC a woolly description of having some  
sort of non-strict beta-reduction behavior.


Bob


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Ryan Ingram
On a totally non-theory side, Haskell isn't referentially transparent.
 In particular, any code that calls unsafePerformIO or
unsafeInterleaveIO is not necessarily RT.  Given that getContents and
interact use unsafeInterleaveIO, this includes most toy programs as
well as almost every non-toy program; almost all use unsafePerformIO
intentionally.

That said, the situations in which these functions are RT are not that
hard to meet, but they would muddy up any formal proof significantly.

Personally, I'm happy with the hand-wavy proof that goes like this:

1) pure functions don't have side effects.
2) side-effect free functions can be duplicated or shared without
affecting the results of the program that uses them (in the absence of
considerations of resource limitation like running out of memory)
3) any function that only uses other pure functions is pure
4) all Haskell98 functions that don't use unsafe* are pure
5) therefore Haskell98 minus unsafe functions is referentially transparent.

Note that I am including I/O actions as pure when observed as the
GHC implementation of IO a ~~ World -> (World, a); the function result
that is the value of main is pure.  It is only the observation of
that function by the runtime that causes side effects.

  -- ryan

On Wed, Nov 12, 2008 at 2:11 AM, Andrew Birkett [EMAIL PROTECTED] wrote:
 Hi,

 Is a formal proof that the Haskell language is referentially transparent?
  Many people state haskell is RT without backing up that claim.  I know
 that, in practice, I can't write any counter-examples but that's a bit
 handy-wavy.  Is there a formal proof that, for all possible haskell
 programs, we can replace coreferent expressions without changing the meaning
 of a program?

 (I was writing a blog post about the origins of the phrase 'referentially
 transparent' and it made me think about this)

 Andrew

 --
 - http://www.nobugs.org -


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Luke Palmer
On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED] wrote:
 Andrew Birkett wrote:

 Hi,

 Is a formal proof that the Haskell language is referentially transparent?
  Many people state haskell is RT without backing up that claim.  I know
 that, in practice, I can't write any counter-examples but that's a bit
 handy-wavy.  Is there a formal proof that, for all possible haskell
 programs, we can replace coreferent expressions without changing the meaning
 of a program?

 The (well, a natural approach to a) formal proof would be to give a formal
 semantics for haskell.

Haskell 98 does not seem that big to me (it's not teeny, but it's
nothing like C++), yet we are continually embarrassed about not having
any formal semantics.  What are the challenges preventing its
creation?

Luke


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Benja Fallenstein
Hi all,

On Wed, Nov 12, 2008 at 2:09 PM, Lennart Augustsson
[EMAIL PROTECTED]wrote:

 You can't write a straightforward dynamic semantics (in, say,
 denotational style) for Haskell.
 The problem is that with type classes you need to know the types
 compute the values.

...

 It's possible that there's some more direct approach that represents
 types as some kind of runtime values, but nobody (to my knowledge) has
 done that.


It seems to me that if you don't need your semantics to do the work of
type-*checking* (i.e., you don't care if the rules assign some strange
denotation to an ill-typed term), there is a natural approach to giving
semantics to 98-style type classes. I'm figuring out the details as I type,
though; if anybody sees anything that does not work, please do tell!

Let type mean a non-overloaded type expression (i.e., not containing
variables). I'll suppose that you already know the set of types in the
program and the domain associated with each type. (This is usual in
denotational treatments, right?) Let the domains be such that all sets of
elements have a join (i.e., complete lattices). Define the domain of
overloaded values to be the domain of functions from types to values of
these types or bottom (when I say bottom in the following, it is this
bottom, which is below the domain's bottom), i.e., the product of the
liftings of the domains of all possible types. The interpretation of
bottom is, this overloaded value doesn't have an instance at this type
(e.g., [7,9] is bottom at type Bool).

An *environment* is a function from symbols to overloaded values. The
denotation of an expression is a function from environments to overloaded
values; the denotation of a definition list is a function from environments
to environments; the denotation of a whole program is the least fixed point
of the definition list that makes up the program.
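A toy Haskell rendering of these domains (the little Type/Value universe is only for illustration; Nothing plays the role of the extra "no instance at this type" bottom):

data Type  = TInt | TBool | TFun Type Type deriving (Eq, Show)
data Value = VInt Int | VBool Bool | VFun (Value -> Value)

type OverloadedValue = Type -> Maybe Value
type Environment     = String -> OverloadedValue

-- the 'update' used below, defined in the natural way
update :: String -> OverloadedValue -> Environment -> Environment
update x v env y = if y == x then v else env y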

Expressions are interpreted as follows (update :: Symbol -> OverloadedValue
-> Environment -> Environment is defined in the natural way):

[[x]] = \env type. env x type
[[T :: TYPE]] = \env type. if type instance of TYPE then [[T]] env type else
bottom
[[F X]] = Join_type2. \env type1. [[F]] env (type1 -> type2) $ [[X]] env
type2
[[\x. T]] = \env type. case type of
(type1 -> type2) -> \val. [[T]] (update x (mono type1 val) env) type2
otherwise -> bottom
  where mono ty val = \ty'. if ty == ty' then val else bottom
[[let x=S in T]] = \env type.
[[T]] (fix $ \env'. update x ([[S]] env') env) type

Simple definitions are interpreted in the obvious way:

[[x = T]] = \env sym. if sym == x then ([[T]] env) else (env sym)
[[x :: Ty; x = T]] = [[x = T :: Ty]]

Finally, instance declarations are interpreted in a special way. To
interpret the declaration

instance C Ty1 .. Tyn where
...; xi = Ti; ...

we need to know the set of possible types for xi. (No type inference should
be necessary -- the class declaration + the Ty1 .. Tyn from the instance
declaration should give us all the necessary information, right?) Call this
set Types. Then,

[[xi = Ti]] = \env sym type. if sym == xi && type in Types then [[Ti]] env
type else env sym type

That is, an instance declaration sets the value of a symbol at some types,
and leaves the values at the other types alone.

Some notes:

* If T is a well-typed term and type is *not* a type instance of that term,
then ([[T]] env type) is bottom.

* In particular, [[T :: TYPE]] env type = [[T]] env type, when type is an
instance of TYPE; otherwise, [[T :: TYPE]] env type = bottom.

* Application is supposed to be strict in bottom: bottom x = bottom; f
bottom = bottom.

* In interpreting [[F X]], we take the join over all possible types of X
(type2). If X is monomorphic, then ([[X]] env type2) is bottom for all
types except the correct one, so the join gives the correct denotation. If X
is polymorphic, but the type of F together with the type forced by the
context of (F X) determine what type of X must be used, then either
([[F]] env (type1 -> type2)) or ([[X]] env type2) will be bottom for every
type2 except for the one we want to use; so the application ([[F]] ... $
[[X]] ...) will be bottom except for the correct type; so again, the join
will give the correct denotation. (Caveat lector: I haven't proved this, I'm
running on intuition :))

* In the interpretation of expressions like (show (read "5")), the join is
non-degenerate: it consists of the join of "5", "5.0" etc. But since such
expressions are rejected by the type-checker, we don't care.

* Lets containing multiple definitions can be translated as in the Haskell
report, if we add an interpretation for a case construct (I'm too lazy to
try):
http://www.haskell.org/onlinereport/exps.html#sect3.12

* In the interpretation of the lambda expression, we need to interpret the
body of the expression separately for every type of the lambda expression;
the function 'mono' converts the monomorphic parameter value into an
OverloadedValue that is bottom everywhere except at the given parameter type.

Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Lennart Augustsson
Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
Since you are talking to the outside world any behaviour is acceptable.
All the weird interactions between getContents and writing the same
file from the same program could, in principle, happen if a different
program wrote the file.
The unsafePerformIO function can break RT, but I think everyone is
aware of that. :)

  -- Lennart

On Wed, Nov 12, 2008 at 6:47 PM, Ryan Ingram [EMAIL PROTECTED] wrote:
 On a totally non-theory side, Haskell isn't referentially transparent.
  In particular, any code that calls unsafePerformIO or
 unsafeInterleaveIO is not necessarily RT.  Given that getContents and
 interact use unsafeInterleaveIO, this includes most toy programs as
 well as almost every non-toy program; almost all use unsafePerformIO
 intentionally.

 That said, the situations in which these functions are RT are not that
 hard to meet, but they would muddy up any formal proof significantly.

 Personally, I'm happy with the hand-wavy proof that goes like this:

 1) pure functions don't have side effects.
 2) side-effect free functions can be duplicated or shared without
 affecting the results of the program that uses them (in the absence of
 considerations of resource limitation like running out of memory)
 3) any function that only uses other pure functions is pure
 4) all Haskell98 functions that don't use unsafe* are pure
 5) therefore Haskell98 minus unsafe functions is referentially transparent.

 Note that I am including I/O actions as pure when observed as the
 GHC implementation of IO a ~~ World -> (World, a); the function result
 that is the value of main is pure.  It is only the observation of
 that function by the runtime that causes side effects.
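
A minimal sketch of that state-passing reading (World and IO' below are
illustrative stand-ins, not GHC's actual types):

-- A toy model of IO as a state-passing function, in the spirit of the
-- remark above.
data World = World  -- a token standing for the state of the outside world

newtype IO' a = IO' { runIO' :: World -> (World, a) }

instance Functor IO' where
  fmap f (IO' g) = IO' $ \w -> let (w1, a) = g w in (w1, f a)

instance Applicative IO' where
  pure a = IO' $ \w -> (w, a)
  IO' f <*> IO' g = IO' $ \w ->
    let (w1, h) = f w
        (w2, a) = g w1
    in (w2, h a)

instance Monad IO' where
  IO' g >>= k = IO' $ \w -> let (w1, a) = g w in runIO' (k a) w1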

  -- ryan

 On Wed, Nov 12, 2008 at 2:11 AM, Andrew Birkett [EMAIL PROTECTED] wrote:
 Hi,

 Is there a formal proof that the Haskell language is referentially transparent?
  Many people state haskell is RT without backing up that claim.  I know
 that, in practice, I can't write any counter-examples but that's a bit
 hand-wavy.  Is there a formal proof that, for all possible haskell
 programs, we can replace coreferent expressions without changing the meaning
 of a program?

 (I was writing a blog post about the origins of the phrase 'referentially
 transparent' and it made me think about this)

 Andrew

 --
 - http://www.nobugs.org -
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Thomas Davie


On 12 Nov 2008, at 14:47, Mitchell, Neil wrote:




It's possible that there's some more direct approach that
represents types as some kind of runtime values, but nobody
(to my knowledge) has done that.


I don't think it's possible - I tried it and failed.

Consider:

show (f [])

Where f has the semantics of id, but has either the return type [Int] or
[Char] - you get different results. Without computing the types
everywhere, I don't see how you can determine the precise type of [].
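
A tiny self-contained illustration of the quoted point, with f specialised to
id and the two candidate types supplied by hand:

-- The same empty list shows differently depending on the type chosen
-- for it; without the type there is no way to pick a result.
demoInt, demoChar :: String
demoInt  = show (id [] :: [Int])   -- yields "[]"
demoChar = show (id [] :: [Char])  -- yields "\"\""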


Surely all this means is that part of the semantics of Haskell is the  
semantics of the type system -- isn't this expected?


Bob
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Ryan Ingram
On Wed, Nov 12, 2008 at 12:35 PM, Lennart Augustsson
[EMAIL PROTECTED] wrote:
 Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
 Since you are talking to the outside world any behaviour is acceptable.
 All the weird interactions between getContents and writing the same
 file from the same program could, in principle, happen if a different
 program wrote the file.

Yes, I was thinking about this on my way to work and thought that I
may have spoken too soon; I couldn't come up with a way for
unsafeInterleaveIO to break referential transparency (although I
couldn't prove to myself that it couldn't, either).  Your argument
seems good to me, though.

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Lennart Augustsson
You can't write a straightforward dynamic semantics (in, say,
denotational style) for Haskell.
The problem is that with type classes you need to know the types to
compute the values.
You could use a two step approach: first make a static semantics that
does type inference/checking and translates Haskell into a different
form that has resolved all overloading.
And, secondly, you can write a dynamic semantics for that language.
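
The usual concrete form of that first step is dictionary translation; a tiny
hand-desugared sketch (ShowDict, display and so on are illustrative names, not
part of any standard library):

-- class Show' a where show' :: a -> String   desugars into an ordinary
-- record of methods, passed around explicitly.
data ShowDict a = ShowDict { showMethod :: a -> String }

showDictInt :: ShowDict Int
showDictInt = ShowDict { showMethod = show }

-- An overloaded function   display :: Show' a => a -> String   becomes a
-- plain function taking the dictionary as an extra argument.
display :: ShowDict a -> a -> String
display d x = "value: " ++ showMethod d x

-- The static step picks the dictionary; the dynamic semantics only ever
-- sees the fully resolved call.
example :: String
example = display showDictInt (42 :: Int)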

But since the semantics has to have the type inference engine inside
it, it's going to be a pain.

It's possible that there's some more direct approach that represents
types as some kind of runtime values, but nobody (to my knowledge) has
done that.

  -- Lennart

On Wed, Nov 12, 2008 at 12:39 PM, Luke Palmer [EMAIL PROTECTED] wrote:
 On Wed, Nov 12, 2008 at 3:21 AM, Jules Bean [EMAIL PROTECTED] wrote:
 Andrew Birkett wrote:

 Hi,

  Is there a formal proof that the Haskell language is referentially transparent?
   Many people state haskell is RT without backing up that claim.  I know
  that, in practice, I can't write any counter-examples but that's a bit
  hand-wavy.  Is there a formal proof that, for all possible haskell
 programs, we can replace coreferent expressions without changing the meaning
 of a program?

 The (well, a natural approach to a) formal proof would be to give a formal
 semantics for haskell.

 Haskell 98 does not seem that big to me (it's not teeny, but it's
 nothing like C++), yet we are continually embarrassed about not having
 any formal semantics.  What are the challenges preventing its
 creation?

 Luke
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Andrew Birkett

Edsko de Vries wrote:
See "What is a purely functional language?" by Sabry. Not quite a formal 
proof about *Haskell*, but then we would first need a formal semantics 
of Haskell to be able to do that proof ;-)


Thanks for the reference, and also to everyone who replied -  all very 
useful and interesting.  For what it's worth, the blog posts I was 
writing are here:


http://www.nobugs.org/blog/archives/2008/11/12/why-do-they-call-it-referentially-transparent/
http://www.nobugs.org/blog/archives/2008/11/12/why-do-they-call-it-referentially-transparent-ii/

Andrew
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread David MacIver
On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
[EMAIL PROTECTED] wrote:
 Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.

Really? It seems easy to create things with it which when passed to
ostensibly pure functions yield different results depending on their
evaluation order:

module Main where

import System.IO.Unsafe
import Data.IORef

main = do w1 <- weirdTuple
          print w1
          w2 <- weirdTuple
          print $ swap w2

swap (x, y) = (y, x)

weirdTuple :: IO (Int, Int)
weirdTuple = do it <- newIORef 1
                x <- unsafeInterleaveIO $ readIORef it
                y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                return (x, y)

[EMAIL PROTECTED]:~$ ./Unsafe
(1,1)
(1,2)

So show isn't acting in a referentially transparent way: If the second
part of the tuple were evaluated before the first part it would give a
different answer (as swapping demonstrates).
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Jonathan Cast
On Wed, 2008-11-12 at 22:16 +, David MacIver wrote:
 On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
 [EMAIL PROTECTED] wrote:
  Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
 
 Really? It seems easy to create things with it which when passed to
 ostensibly pure functions yield different results depending on their
 evaluation order:
 
 module Main where
 
 import System.IO.Unsafe
 import Data.IORef
 
 main = do w1 <- weirdTuple
           print w1
           w2 <- weirdTuple
           print $ swap w2
 
 swap (x, y) = (y, x)
 
 weirdTuple :: IO (Int, Int)
 weirdTuple = do it <- newIORef 1
                 x <- unsafeInterleaveIO $ readIORef it
                 y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                 return (x, y)
 
 [EMAIL PROTECTED]:~$ ./Unsafe
 (1,1)
 (1,2)
 
 So show isn't acting in a referentially transparent way: If the second
 part of the tuple were evaluated before the first part it would give a
 different answer (as swapping demonstrates).

It seems that this argument depends on being able to find a case where
w1 and swap w1 actually behave differently.  weirdTuple is
non-deterministic, but that's fine, since it's in the IO monad.  And w1
and w2 are simply two (distinct!) lambda-bound variables; there is no
requirement that two different lambda-bound variables behave in the same
fashion, regardless of how values may be expected to be supplied for
them at run time (what values the functions in question may be expected
to be applied to) unless the function(s) they are formal parameters of
are (both) applied to the same expression.  (>>=) certainly does not
count as `application' for present purposes.

Even if it is insisted (why?  I don't think GHC actually guarantees to
produce the above result when main is executed) that main must always
yield the above result, it does not follow that unsafePerformIO is
non-RT; it is still only non-causal.  But referential transparency
doesn't require that the result of an IO action must depend only on
events that transpire by the time it finishes running; it places, in
fact, no requirement on the run-time behavior of any IO action at all.
It requires only that equal expressions be substitutable for equals;
and, again, w1 and w2 being the result of calling a single IO action
with no dependence on the outside world does not require them to be
equal, under any system of semantics.  So, no violation of RT.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Don Stewart
david.maciver:
 On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
 [EMAIL PROTECTED] wrote:
  Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
 
 Really? It seems easy to create things with it which when passed to
 ostensibly pure functions yield different results depending on their
 evaluation order:
 
 module Main where
 
 import System.IO.Unsafe
 import Data.IORef
 
 main = do w1 <- weirdTuple
           print w1
           w2 <- weirdTuple
           print $ swap w2
 
 swap (x, y) = (y, x)
 
 weirdTuple :: IO (Int, Int)
 weirdTuple = do it <- newIORef 1
                 x <- unsafeInterleaveIO $ readIORef it
                 y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                 return (x, y)
 
 [EMAIL PROTECTED]:~$ ./Unsafe
 (1,1)
 (1,2)
 
 So show isn't acting in a referentially transparent way: If the second
 part of the tuple were evaluated before the first part it would give a
 different answer (as swapping demonstrates).

Mmmm? No. Where's the pure function that's now producing different
results?  I only see IO actions at play, which are operating on the
state of the world.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread David MacIver
On Wed, Nov 12, 2008 at 10:46 PM, Don Stewart [EMAIL PROTECTED] wrote:
 david.maciver:
 On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
 [EMAIL PROTECTED] wrote:
  Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.

 Really? It seems easy to create things with it which when passed to
 ostensibly pure functions yield different results depending on their
 evaluation order:

 module Main where

 import System.IO.Unsafe
 import Data.IORef

 main = do w1 <- weirdTuple
           print w1
           w2 <- weirdTuple
           print $ swap w2
 
 swap (x, y) = (y, x)
 
 weirdTuple :: IO (Int, Int)
 weirdTuple = do it <- newIORef 1
                 x <- unsafeInterleaveIO $ readIORef it
                 y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                 return (x, y)

 [EMAIL PROTECTED]:~$ ./Unsafe
 (1,1)
 (1,2)

 So show isn't acting in a referentially transparent way: If the second
 part of the tuple were evaluated before the first part it would give a
 different answer (as swapping demonstrates).

 Mmmm? No. Where's the pure function that's now producing different
 results?  I only see IO actions at play, which are operating on the
 state of the world.

I suppose so. The point is that you have a pure function (show) and
the results of evaluating it totally depend on its evaluation order.
But you're still in the IO monad at this point so I suppose it
technically doesn't count because it's only observable as the result
of IO.

It's pretty suspect behaviour, but not actually a failure of
referential transparency.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Jonathan Cast
On Wed, 2008-11-12 at 23:02 +, David MacIver wrote:
 On Wed, Nov 12, 2008 at 10:46 PM, Don Stewart [EMAIL PROTECTED] wrote:
  david.maciver:
  On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
  [EMAIL PROTECTED] wrote:
   Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
 
  Really? It seems easy to create things with it which when passed to
  ostensibly pure functions yield different results depending on their
  evaluation order:
 
  module Main where
 
  import System.IO.Unsafe
  import Data.IORef
 
  main = do w1 <- weirdTuple
            print w1
            w2 <- weirdTuple
            print $ swap w2
  
  swap (x, y) = (y, x)
  
  weirdTuple :: IO (Int, Int)
  weirdTuple = do it <- newIORef 1
                  x <- unsafeInterleaveIO $ readIORef it
                  y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                  return (x, y)
 
  [EMAIL PROTECTED]:~$ ./Unsafe
  (1,1)
  (1,2)
 
  So show isn't acting in a referentially transparent way: If the second
  part of the tuple were evaluated before the first part it would give a
  different answer (as swapping demonstrates).
 
  Mmmm? No. Where's the pure function that's now producing different
  results?  I only see IO actions at play, which are operating on the
  state of the world.
 
 I suppose so. The point is that you have a pure function (show) and
 the results of evaluating it totally depend on its evaluation order.

Sure.  But only because the argument to it depends on its evaluation
order, as well.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread David MacIver
On Wed, Nov 12, 2008 at 11:05 PM, Jonathan Cast
[EMAIL PROTECTED] wrote:
 On Wed, 2008-11-12 at 23:02 +, David MacIver wrote:
 On Wed, Nov 12, 2008 at 10:46 PM, Don Stewart [EMAIL PROTECTED] wrote:
  david.maciver:
  On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
  [EMAIL PROTECTED] wrote:
   Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
 
  Really? It seems easy to create things with it which when passed to
  ostensibly pure functions yield different results depending on their
  evaluation order:
 
  module Main where
 
  import System.IO.Unsafe
  import Data.IORef
 
  main = do w1 <- weirdTuple
            print w1
            w2 <- weirdTuple
            print $ swap w2
  
  swap (x, y) = (y, x)
  
  weirdTuple :: IO (Int, Int)
  weirdTuple = do it <- newIORef 1
                  x <- unsafeInterleaveIO $ readIORef it
                  y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                  return (x, y)
 
  [EMAIL PROTECTED]:~$ ./Unsafe
  (1,1)
  (1,2)
 
  So show isn't acting in a referentially transparent way: If the second
  part of the tuple were evaluated before the first part it would give a
  different answer (as swapping demonstrates).
 
  Mmmm? No. Where's the pure function that's now producing different
  results?  I only see IO actions at play, which are operating on the
  state of the world.

 I suppose so. The point is that you have a pure function (show) and
 the results of evaluating it totally depend on its evaluation order.

 Sure.  But only because the argument to it depends on its evaluation
 order, as well.

That's not really better. :-)

To put it a different way, in the absence of unsafeInterleaveIO the IO
monad has the property that if f and g are observably equal up to
termination then x >>= f and x >>= g are equivalent in the IO monad
(actually this may not be true due to exception handling. Our three
main weapons...). In the presence of unsafeInterleaveIO this property
is lost even though referential transparency is retained.
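
To make the lost property concrete, here is a sketch built on the weirdTuple
above; f and g below are illustrative continuations (not from the original
message) that agree on every fully defined pair yet are distinguished by the
interleaved one:

import Control.Exception (evaluate)
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

weirdTuple :: IO (Int, Int)
weirdTuple = do it <- newIORef 1
                x <- unsafeInterleaveIO $ readIORef it
                y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                return (x, y)

-- f and g force the components in opposite orders, then print the pair;
-- on any fully defined pair they behave identically.
f, g :: (Int, Int) -> IO ()
f (x, y) = evaluate x >> evaluate y >> print (x, y)
g (x, y) = evaluate y >> evaluate x >> print (x, y)

main :: IO ()
main = do weirdTuple >>= f   -- prints (1,1): x is read before the write runs
          weirdTuple >>= g   -- prints (2,1): the write runs before x is read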

Anyway, I'll shut up now.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Jonathan Cast
On Wed, 2008-11-12 at 23:18 +, David MacIver wrote:
 On Wed, Nov 12, 2008 at 11:05 PM, Jonathan Cast
 [EMAIL PROTECTED] wrote:
  On Wed, 2008-11-12 at 23:02 +, David MacIver wrote:
  On Wed, Nov 12, 2008 at 10:46 PM, Don Stewart [EMAIL PROTECTED] wrote:
   david.maciver:
   On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
   [EMAIL PROTECTED] wrote:
Actually, unsafeInterleaveIO is perfectly fine from a RT point of 
view.
  
   Really? It seems easy to create things with it which when passed to
   ostensibly pure functions yield different results depending on their
   evaluation order:
  
   module Main where
  
   import System.IO.Unsafe
   import Data.IORef
  
   main = do w1 <- weirdTuple
             print w1
             w2 <- weirdTuple
             print $ swap w2
   
   swap (x, y) = (y, x)
   
   weirdTuple :: IO (Int, Int)
   weirdTuple = do it <- newIORef 1
                   x <- unsafeInterleaveIO $ readIORef it
                   y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                   return (x, y)
  
   [EMAIL PROTECTED]:~$ ./Unsafe
   (1,1)
   (1,2)
  
   So show isn't acting in a referentially transparent way: If the second
   part of the tuple were evaluated before the first part it would give a
   different answer (as swapping demonstrates).
  
   Mmmm? No. Where's the pure function that's now producing different
   results?  I only see IO actions at play, which are operating on the
   state of the world.
 
  I suppose so. The point is that you have a pure function (show) and
  the results of evaluating it totally depend on its evaluation order.
 
  Sure.  But only because the argument to it depends on its evaluation
  order, as well.
 
 That's not really better. :-)

I never said it was.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Derek Elkins
On Wed, 2008-11-12 at 23:02 +, David MacIver wrote:
 On Wed, Nov 12, 2008 at 10:46 PM, Don Stewart [EMAIL PROTECTED] wrote:
  david.maciver:
  On Wed, Nov 12, 2008 at 8:35 PM, Lennart Augustsson
  [EMAIL PROTECTED] wrote:
   Actually, unsafeInterleaveIO is perfectly fine from a RT point of view.
 
  Really? It seems easy to create things with it which when passed to
  ostensibly pure functions yield different results depending on their
  evaluation order:
 
  module Main where
 
  import System.IO.Unsafe
  import Data.IORef
 
  main = do w1 <- weirdTuple
            print w1
            w2 <- weirdTuple
            print $ swap w2
  
  swap (x, y) = (y, x)
  
  weirdTuple :: IO (Int, Int)
  weirdTuple = do it <- newIORef 1
                  x <- unsafeInterleaveIO $ readIORef it
                  y <- unsafeInterleaveIO $ do writeIORef it 2 >> return 1
                  return (x, y)
 
  [EMAIL PROTECTED]:~$ ./Unsafe
  (1,1)
  (1,2)
 
  So show isn't acting in a referentially transparent way: If the second
  part of the tuple were evaluated before the first part it would give a
  different answer (as swapping demonstrates).
 
  Mmmm? No. Where's the pure function that's now producing different
  results?  I only see IO actions at play, which are operating on the
  state of the world.
 
 I suppose so. The point is that you have a pure function (show) and
 the results of evaluating it totally depend on its evaluation order.
 But you're still in the IO monad at this point so I suppose it
 technically doesn't count because it's only observable as the result
 of IO.
 
 It's pretty suspect behaviour, but not actually a failure of
 referential transparency.

Indeed.  There's a difference between purity and referential
transparency.  A lack of purity is when behaviour, as in semantics,
depends on evaluation order (modulo bottom of course).  Referential
transparency is being able to substitute equals for equals.  These
notions are related but independent.

Examples of failures of referential transparency that aren't failures of
purity are easy to find.  A simple one would be some kind of
introspection feature, such as various forms of quotation, being able to
ask for the name of a variable, being able to serialize functions.  So
purity doesn't imply referential transparency.
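
Template Haskell's quotation brackets give a flavour of the quotation example
in real Haskell: the two expressions below denote equal values, yet
substituting one for the other inside the brackets changes the result (a
sketch only, not a claim about Haskell 98 itself):

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Equal values, different quotations: nothing impure happens, but equals
-- cannot be substituted for equals under the brackets.
exprA, exprB :: Q Exp
exprA = [| 1 + 1 |]
exprB = [| 2 |]

-- runQ exprA and runQ exprB produce different syntax trees for equal values.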

Failures of purity that aren't failures of referential transparency are
a bit trickier since without purity (i.e. evaluation order independence)
what constitutes a valid substitution may vary.  Still, as an easy
start, a language with no binding forms is trivially referentially
transparent regardless of how impure it may be.  If you use a
call-by-name evaluation order, the full beta rule holds and evaluation
proceeds by substituting equals for equals and therefore such a language
is also referentially transparent.  So referential transparency doesn't
imply purity.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Dan Doel
On Wednesday 12 November 2008 6:18:38 pm David MacIver wrote:
 To put it a different way, in the absence of unsafeInterleaveIO the IO
 monad has the property that if f and g are observably equal up to
 termination then x >>= f and x >>= g are equivalent in the IO monad
 (actually this may not be true due to exception handling. Our three
 main weapons...). In the presence of unsafeInterleaveIO this property
 is lost even though referential transparency is retained.

I'm not sure what exactly you mean by this, but, for instance:

randomIO >>= (print :: Int -> IO ())

obviously isn't going to give the same results as:

randomIO >>= (print :: Int -> IO ())

every time. Your weirdTuple is semantically similar, in that it selects 
between returning (1,1) and (2,1), but instead of being random, it 
operationally selects depending on how you subsequently evaluate the value. 
That may not seem like the same thing, because you know what's going on, but 
formally, I imagine you can treat it all as a black box where either this 
time it gave (2,1) or this time it gave (1,1), and what you see is always 
consistent. The fact that it sort of uses an oracle to see what will happen in 
its future is just The Power of IO (it does have a sort of spooky action 
feel to it). :)

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Jonathan Cast
On Wed, 2008-11-12 at 18:47 -0500, Dan Doel wrote:
 On Wednesday 12 November 2008 6:18:38 pm David MacIver wrote:
  To put it a different way, in the absence of unsafeInterleaveIO the IO
  monad has the property that if f and g are observably equal up to
  termination then x >>= f and x >>= g are equivalent in the IO monad
  (actually this may not be true due to exception handling. Our three
  main weapons...). In the presence of unsafeInterleaveIO this property
  is lost even though referential transparency is retained.
 
 I'm not sure what exactly you mean by this, but, for instance:
 
 randomIO >>= (print :: Int -> IO ())
 
 obviously isn't going to give the same results as:
 
 randomIO >>= (print :: Int -> IO ())
 
 every time. Your weirdTuple is semantically similar, in that it selects 
 between returning (1,1) and (2,1), but instead of being random, it 
 operationally selects depending on how you subsequently evaluate the value.

I think the point is that randomIO is non-deterministic (technically,
pseudo-random) but causal --- the result is completely determined by
events that precede its completion.  unsafeInterleaveIO, by contrast, is
arguably (sometimes) deterministic --- but is, regardless, definitely
not causal.

jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Lennart Augustsson
But how can you tell whether something is causal or not?
Assuming that all IO operations return random results, of course. :)

From a practical point of view unsafeInterleaveIO can give suspect results.
But theoretically you can't fault it until you have given a semantics
for the IO monad operations so you have something to compare it
against.

  -- Lennart

On Thu, Nov 13, 2008 at 12:05 AM, Jonathan Cast
[EMAIL PROTECTED] wrote:
 On Wed, 2008-11-12 at 18:47 -0500, Dan Doel wrote:
 On Wednesday 12 November 2008 6:18:38 pm David MacIver wrote:
  To put it a different way, in the absence of unsafeInterleaveIO the IO
  monad has the property that if f and g are observably equal up to
  termination then x >>= f and x >>= g are equivalent in the IO monad
  (actually this may not be true due to exception handling. Our three
  main weapons...). In the presence of unsafeInterleaveIO this property
  is lost even though referential transparency is retained.

 I'm not sure what exactly you mean by this, but, for instance:

 randomIO >>= (print :: Int -> IO ())

 obviously isn't going to give the same results as:

 randomIO >>= (print :: Int -> IO ())

 every time. Your weirdTuple is semantically similar, in that it selects
 between returning (1,1) and (2,1), but instead of being random, it
 operationally selects depending on how you subsequently evaluate the value.

 I think the point is that randomIO is non-deterministic (technically,
 pseudo-random) but causal --- the result is completely determined by
 events that precede its completion.  unsafeInterleaveIO, by contrast, is
 arguably (sometimes) deterministic --- but is, regardless, definitely
 not causal.

 jcc


 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proof that Haskell is RT

2008-11-12 Thread Ryan Ingram
Interesting posts.  Thanks!

On Wed, Nov 12, 2008 at 2:02 PM, Andrew Birkett [EMAIL PROTECTED] wrote:
 Thanks for the reference, and also to everyone who replied -  all very
 useful and interesting.  For what it's worth, the blog posts I was writing
 are here:

 http://www.nobugs.org/blog/archives/2008/11/12/why-do-they-call-it-referentially-transparent/
 http://www.nobugs.org/blog/archives/2008/11/12/why-do-they-call-it-referentially-transparent-ii/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe