Re: lifting functions to tuples?

2003-11-21 Thread Tim Sweeney
In case this is of interest, there is another novel way that a language
might support liftTup in full generality.  Example code:

 LiftTup(f:function)(a:f.dom,b:f.dom)={f(a),f(b)}
 f(x:int)=x+1

 LiftTup(f)(3,7)
{4,8}

 LiftTup(caps)("Hello","Goodbye")
{"HELLO","GOODBYE"}

Here, the type system supports full set-theory-style subtyping, with
functions subtyping such that a->b : c->d iff b:d and c:a; an empty type
"false" containing no elements; and a "function" type, the type of
functions with empty domain, which is thus a supertype of every function
type.  Thus LiftTup above takes a parameter f which may be a function of
any domain and range.

The f.dom notation extracts the (dependent) domain type of the function f,
which depends on the actual function being passed.  Thus the code above is
statically typecheckable, and a program such as LiftTup(caps)(3,7) would
fail to typecheck, because caps is a function from strings (arrays/tuples
of characters) to strings.

Though, in my language, tuples are merely dependent-typed arrays of known
size, which themselves are a subtype of the type of dependent-typed arrays
of unknown (i.e. existentially quantified) size, expressed syntactically
as, e.g., []:int.  So the above can be rewritten more generally:

 Map(f:function)(a[]:f.dom)=array(i:nat<a.len)f(a[i])

 Map(f)(3,7,9,12)
{4,8,10,13}

Where the array notation is an array lambda expression, and a.len extracts
the length of an unknown-sized array.

This language allows quite a bit more generality and type precision in
function and data type definitions, though code tends to lack the
conciseness made possible by Haskell's type inference.
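
For comparison, a restricted form of this lifting can be written in today's
Haskell with rank-2 polymorphism, provided the lifted function is polymorphic
over some class rather than over an arbitrary domain.  A minimal sketch,
assuming GHC's RankNTypes (liftTup here is an illustrative name):

  {-# LANGUAGE RankNTypes #-}

  -- The lifted function must work at every Num type, so the two tuple
  -- components may still have different (numeric) types.
  liftTup :: (Num b, Num c) => (forall a. Num a => a -> a) -> (b, c) -> (b, c)
  liftTup f (x, y) = (f x, f y)

  -- e.g. liftTup (+1) (3 :: Int, 7.5 :: Double)  ==  (4, 8.5)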

-Tim

- Original Message - 
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Sent: Wednesday, November 19, 2003 8:25 PM
Subject: Re: lifting functions to tuples?


 The problem:

 liftTup f (a, b) = (f a, f b)

 of the signature
 liftTup:: ?? -> (a,b) -> (c,d)

 Again, it is possible to write this in Haskell with common extensions

  {-# OPTIONS -fglasgow-exts #-}

  import Data.Dynamic
  import Data.Maybe

  liftp f (a,b) = ((fromJust . fromDynamic . f . toDyn) a,
   (fromJust . fromDynamic . f . toDyn) b)

 *Main> :t liftp
 forall a2 a a3 a1.
 (Typeable a, Typeable a1, Typeable a2, Typeable a3) =>
 (Dynamic -> Dynamic) -> (a1, a3) -> (a, a2)


  f1 x | isJust (fromDynamic x::(Maybe Int))
  = let y = fromJust $ fromDynamic x in toDyn $ (toEnum (y + 1)::Char)
  f1 x | isJust (fromDynamic x::(Maybe Float))
  = let y::Float = fromJust $ fromDynamic x in toDyn $ (round (y + 1)::Int)
  f1 x = x

 *Main> liftp f1 (65::Int,1.0::Float) :: (Char,Int)
 ('B',2)

  f2 x | isJust (fromDynamic x::(Maybe Bool))
  = let y = fromJust $ fromDynamic x
in toDyn $ ((toEnum (if y then 42 else 65))::Char)
  f2 x | isJust (fromDynamic x::(Maybe ()))
  = let () = fromJust $ fromDynamic x in toDyn $ (2.5::Float)
  f2 x = x

 *Main> liftp f2 (True,()) :: (Char,Float)
 ('*',2.5)
 *Main> liftp f2 (False,()) :: (Char,Float)
 ('A',2.5)
 *Main> liftp f2 (False,1::Int) :: (Char,Int)
 ('A',1)

 As has been discussed on this list on many occasions, Dynamic (and the
 accompanying safe coerce) can be implemented in Haskell98 plus
 existentials and multi-parameter classes with functional dependencies.

 As a matter of fact, translating (a,b) into (c,d) doesn't seem to be
 much different from the generic mapping. I think Strafunski can
 express such a transformation trivially. Right?
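
For comparison, a similar lifting can be sketched with Data.Typeable's cast in
place of Dynamic; liftp' and bump below are illustrative names, and RankNTypes
is assumed:

  {-# LANGUAGE RankNTypes #-}

  import Data.Typeable (Typeable, cast)

  -- The lifted function reports, per component, whether it can produce a
  -- value of the requested result type.
  liftp' :: (Typeable a, Typeable b, Typeable c, Typeable d)
         => (forall x y. (Typeable x, Typeable y) => x -> Maybe y)
         -> (a, b) -> Maybe (c, d)
  liftp' f (a, b) = case (f a, f b) of
                      (Just c, Just d) -> Just (c, d)
                      _                -> Nothing

  -- Increments Ints; returns Nothing at every other argument or result type.
  bump :: (Typeable x, Typeable y) => x -> Maybe y
  bump x = case cast x of
             Just n  -> cast (n + (1 :: Int))
             Nothing -> Nothing

  -- e.g. liftp' bump (3 :: Int, 7 :: Int) :: Maybe (Int, Int)  ==  Just (4, 8)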



Re: Proposal for a new I/O library design

2003-07-28 Thread Tim Sweeney
Ben,

I live in a different universe, but over here I prefer to represent files
purely as memory-mapped objects.  In this view, there is no difference
between a read-only file and an immutable array of bytes (a byte being a
natural number between 0 and 255).  A read-write file is then equivalent to
a mutable array (or a reference to a mutable array on a heap) of the same.
Treating these all as heap references tends to be cleaner, because you can
compare the references for equality.  Equality is significant even for
read-only files: two files which contain exactly the same data are not
necessarily the same file, whereas opening the same file in two different
places should result in equal references.

This approach greatly simplifies lots of things now that all modern
operating systems can perform file mapping efficiently, with the virtual
memory subsystem paging pieces in and out as necessary.  It gets rid of
pieces of information which are redundant from the low-level file system's
point of view (the file handle itself, the current file pointer, etc.).
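
A minimal sketch of this view in Haskell types (illustrative names, not a
concrete API proposal):

  import Data.Word (Word8)
  import Data.Array (Array)
  import Data.IORef (IORef)

  type Bytes = Array Int Word8

  -- Both kinds of file are heap references, so (==) on the reference (IORef's
  -- Eq instance is pointer equality) distinguishes two files that happen to
  -- hold identical data.
  type ReadOnlyFile  = IORef Bytes   -- by convention, never written after creation
  type ReadWriteFile = IORef Bytes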

The typical C/Unix approach is to deal with network (TCP or UDP) connections
as streams, too.  Obviously, memory-mapped files aren't a good way of
exposing them -- doing so would require buffering all past data, as well as
blocking when waiting on yet-unreceived data (when you really want to be
able to query whether there is incoming data available).  Instead, I prefer
to conceptualize network connections as a socket/packet-based interface,
with functions to open and close sockets, send a complete packet (an array
of bytes) to a socket, receive a packet from a socket, and query packet
availability.  With this approach, there is no redundancy or missing
information; everything that is observable from the protocol's point of
view is observable in the language interface, and nothing more.
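
A sketch of that interface as a Haskell class, with hypothetical names (this
does not correspond to any existing library):

  import Data.Word (Word8)

  type Packet = [Word8]   -- a complete packet, as an array of bytes

  class PacketSocket s where
    sendPacket  :: s -> Packet -> IO ()
    recvPacket  :: s -> IO Packet   -- waits for the next complete packet
    packetReady :: s -> IO Bool     -- non-blocking availability query
    closeSocket :: s -> IO ()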

In this manner, it's possible to get rid of all remnants of Unix-like
streams from a language's IO interface.

-Tim

- Original Message -
From: Ben Rudiak-Gould [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Sunday, July 27, 2003 11:35 PM
Subject: Proposal for a new I/O library design


 The other day I was reading the Haskell i18n debate in the list archives,
 and started thinking about possible replacements for the existing Haskell
 file I/O model.

 It occurred to me that the Haskell community has really dropped the ball
 on this one. Haskell's design has always emphasized doing the right thing,
 not merely doing the thing that everyone else happens to be doing. It's
 that philosophy that led to the invention of the monadic I/O model, among
 other things. And yet, what do we choose for our I/O primitives? The same
 old crocks that everyone else was using. We open and close files (whatever
 that's supposed to mean); we expose file handles to the user; we even
 maintain a current position in the file, which is an unnecessary global
 state variable if I've ever seen one.

 The proposal below is the result of a few hours spent thinking about how
 the file system would be accessed if it were actually implemented in
 Haskell, instead of behind a weird C API. I'm very interested in hearing
 comments and criticism. In particular, I want to know if there's enough
 interest in this model that I should actually try to implement it.

 The most important idea in this design as far as i18n is concerned is the
 separation of random-access files from input and output streams. Most of
 the ugliness of the usual file I/O interface comes from conflating these
 three concepts, which are almost totally unrelated. In particular, there's
 no need in this model to worry about the meaning of reading or seeking in
 a text file. Text encoding and decoding apply to streams, not files. To
 read text from a file you layer an input stream on it, apply a text parser
 to that, and read characters. If you need to seek to a new location, you
 create a new stream which starts at that location in the underlying file.
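
 A rough sketch of that layering, using hypothetical stand-ins for the types
 defined below (stubs only, not part of the proposal):

  import Data.Word (Word64)

  data File
  data InputStream
  data TextDecoder
  type FilePos = Word64

  newInputStream :: File -> FilePos -> IO InputStream
  newInputStream = undefined   -- stream starting at a given offset in the file

  decodeText :: TextDecoder -> InputStream -> IO String
  decodeText = undefined       -- text decoding applies to streams, not files

  -- Reading text at a position: layer a stream on the file, then a decoder on it.
  readTextAt :: TextDecoder -> File -> FilePos -> IO String
  readTextAt dec file pos = newInputStream file pos >>= decodeText dec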


  module System.ProposedNewIOModel (...) where

 I assume that all I/O occurs in terms of octets. I think that this holds
 true of every platform on which Haskell is implemented or is likely to be
 implemented.

  type Octet = Word8

 File offsets are 64 bits on all platforms. This model never uses negative
 offsets, so there's no need for a signed type. (But perhaps it would be
 better to use one anyway?) BlockLength should be something appropriate to
 the architecture's address space.

  type FilePos = Word64
  type BlockLength = Int

 A value of type File represents a file, which is essentially a resizable
 strict array of octets. Two values of type File compare equal if they are
 the same file -- that is, if they have the same contents and changes to
 one also appear in the other.

 (File is a bad name for this. For one thing, NTFS and HFS can associate
 more than one chunk of data with each directory entry, and file usually
 refers to all the chunks together. Fork would be more 

Re: Typesafe MRef with a regular monad

2003-06-05 Thread Tim Sweeney
Conjecture: It's impossible to implement RefMonad directly in Haskell
without making use of built-in ST or IO functionality and without unsafe or
potentially diverging code (such as unsafeCoerce).

Any takers?

If this is true or suspected to be true, any thoughts on whether a structure
besides Monad could encapsulate safe references in Haskell without core
language changes? My intuition is that no such structure exists in Haskell
with power and flexibility equivalent to RefMonad (support for references of
arbitrary types, not limited by their context).

If this is generally thought to be impossible in Haskell, what sort of
language extensions would be needed to make this work safely, meaning
without unsafe coercions?  This seems like a hairy problem.  Yet it gets to
the core question of whether Haskell's monads can really implement
imperative features (such as references) in a purely functional way, or
whether they are just a means of sequencing those imperative constructs that
are built into the runtime.

Any solutions I can think of require a dependent-typed heap structure, and
that all references be parameterized by the heap in which they exist.
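
One familiar shape for "references parameterized by the heap in which they
exist" is the phantom-parameter style of ST and STRef.  A signature-only
sketch (RefM and Ref are hypothetical names; the bodies are stubs):

  data RefM s a   -- a computation over heap s
  data Ref s a    -- a reference living in heap s

  newRef   :: a -> RefM s (Ref s a)
  newRef   = undefined
  readRef  :: Ref s a -> RefM s a
  readRef  = undefined
  writeRef :: Ref s a -> a -> RefM s ()
  writeRef = undefined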

-Tim

- Original Message -
From: Ashley Yakeley [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, June 04, 2003 5:19 PM
Subject: Re: Typesafe MRef with a regular monad


 In article [EMAIL PROTECTED],
  [EMAIL PROTECTED] wrote:

  Ashley Yakeley wrote:
  ] ] Is it possible to actually implement a working instance of RefMonad in
  ] ] Haskell, without making use of a built-in monad like IO or ST?
 
  ] You certainly wouldn't be able to do this for any monad M which had:
 
  ]   performM :: forall a. M a -> a;
 
  ] ...because it wouldn't be type-safe: you'd be able to construct coerce
  ] :: a -> b just as you can with unsafePerformIO.
 
  Fortunately, that doesn't seem to be the case.

 That's only because you've failed to do the difficult part: implement
 newRef. Your monadic solution has a statically typed/sized store: I'd
 say it doesn't properly count as a heap since you can't heap new stuff
 on it.

 The original problem was to create an instance of

   class Monad m => RefMonad m r | m -> r where
  newRef :: a -> m (r a)
  readRef :: r a -> m a
  writeRef :: r a -> a -> m ()

 without making use of IO or ST. Given some M and R that have

   instance RefMonad M R
   performM :: forall a. M a -> a

 one can write this:

   coerce :: forall a b. a -> b;
   coerce a = let
 {
 ref = performM (newRef Nothing);
 } in performM (do
 {
 writeRef ref (Just a);
 mb <- readRef ref;
 case mb of {Just b -> return b;};
 });

 --
 Ashley Yakeley, Seattle WA



Re: Safe and sound STRef [Was Implementing RefMonads in Haskell without ST,IO]

2003-06-05 Thread Tim Sweeney
Oleg,

This is a very neat solution to providing coercion-free references in a
local environment.

I'm trying to generalize this to some sort of Monad-like typeclass, where
there is a nice mapping from Monads to this new and more powerful
typeclass, so that one can combine typed references, IO, etc., into a single
framework.

It seems to me that your approach couldn't be generalized in this way in
Haskell, because the type of the resulting reference-using computation
depends on the precise set of heap operations performed there.  So, for
example, you couldn't do something like:

..
a <- newRef Int 123
b <- if (some conditional) then (newRef Int) else (a)
..

Because the heap types propagated out of the conditional's two branches
differ.

The only way I can see generalizing your technique to support this sort of
thing requires a type system supporting both dependent types and subtyping.
Think of the reference monad as looking somewhat like the state monad;
instead of a single piece of state, it propagates a dependent-typed pair of
(heapTypeFunc,heapValueFunc), similar in spirit to your PList construct, with
the type guaranteeing that any heap operation returns an output heapTypeFunc
that's a contravariant extension of its input heapTypeFunc.
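
The state-monad analogy can be made concrete with a parameterized bind whose
heap type changes from step to step.  A minimal Haskell sketch of that shape
(HeapM and bindH are illustrative names, not Oleg's code):

  -- A computation that starts with a heap of type h1 and ends with one of type h2.
  newtype HeapM h1 h2 a = HeapM { runHeapM :: h1 -> (a, h2) }

  returnH :: a -> HeapM h h a
  returnH a = HeapM $ \h -> (a, h)

  -- Composition must thread the output heap type of the first step into the
  -- second -- a shape the ordinary Monad class cannot express, which is why a
  -- plain Monad instance doesn't fit this kind of heap.
  bindH :: HeapM h1 h2 a -> (a -> HeapM h2 h3 b) -> HeapM h1 h3 b
  bindH (HeapM m) f = HeapM $ \h1 ->
    let (a, h2) = m h1
        HeapM g = f a
    in g h2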

Conjecture: Implementing type-safe (coercion-free), composable computations
over typed references isn't possible in Haskell.  By composable I mean that
some operator similar in spirit to >>= on Monads can be implemented and that
computations with differing effects can occur in if-branches.

But then again, I had not thought the problem you solved using PList to be
solvable in Haskell, and am very eager to be proven wrong!

-Tim

- Original Message -
Back in September 2001, Koen Claessen wrote:
] Here is a little experiment I did a while ago. I was trying to isolate
] the capability of the ST monad to deal with different types at the
] same time... I conjecture this functionality cannot be implemented
] in Haskell 98, nor in any of the known safe extensions of Haskell.

Recently, Tim Sweeney wrote
] Is it possible to actually implement a working instance of RefMonad in
] Haskell, without making use of a built-in monad like IO or ST?

The following code shows a safe and sound implementation of a
polymorphic heap with references and updates. The heap is capable of
storing polymorphic, functional and IO values. All operations are
*statically* checked. An attempt to alter a heap reference with a
value of a mismatched type leads to a _compile-time_ error. Everything
is implemented in Haskell98 + multiparameter classes with functional
dependencies + overlapping instances. I suspect that the latter isn't
strictly needed, but it's almost midnight. Most importantly, no IO or
ST monad, no unsafePerformIO or unsafeCoerce, no existential types and
no Dynamics are needed. It seems that the polymorphic updateable heap
can be implemented safely. Although the code looks like monadic
code, the Monad class doesn't seem to be polymorphic enough to wrap
our heap. Perhaps arrows will help.

I'd like to remark first that the ST monad with polymorphic STRef can
be implemented in Haskell98 + safe extensions _provided_ all the
values in question are in the class Show/Read. When we store values,
we store their external representation. When we retrieve a value, we
read it. Similarly for values in the Binary class. All such values are
_safely_ coercible. The following code however does not make this
assumption. In our heap below, we can even store polymorphic
functions and IO values!


First, the tags for values in our heap

 data Zero
 data Succ a

 class HeapTag a where
     tag_value:: a -> Int

 instance HeapTag Zero where
     tag_value _ = 0
   -- I just can't avoid the undefined arithmetics
 instance (HeapTag t) => HeapTag (Succ t) where
     tag_value _ = 1 + tag_value (undefined::t)

The heap reference is the combination of the tag and the desired
type. As we will see, the value of the second argument of the HeapRef
doesn't actually matter -- only its type does.

 data HeapRef t a = HeapRef t a
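 -- (For example, a reference to an Int in the slot tagged Succ Zero has the
 --  type HeapRef (Succ Zero) Int; since only the types matter, such a
 --  reference can be written as  HeapRef undefined undefined.)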

Our heap is implemented as a polymorphic associative list

 data Nil t v r  = Nil
 data Cons t v r = Cons t v r

 class PList ntype ttype vtype cdrtype where
     cdr::   ntype ttype vtype cdrtype -> cdrtype
     empty:: ntype ttype vtype cdrtype -> Bool
     value:: ntype ttype vtype cdrtype -> vtype
     tag::   ntype ttype vtype cdrtype -> ttype

 instance PList Nil ttype vtype cdrtype where
     empty = const True

 instance (PList n t v r, HeapTag tag) => PList Cons tag vtype (n t v r) where
     empty = const False
     value (Cons t v r) = v
     tag   (Cons t v r) = t
     cdr   (Cons t v r) = r


The following is the statically typed polymorphic heap itself:

 class Heap t v h | t h -> v where
     fetch:: (HeapRef t v) -> h -> v
     alter:: (HeapRef t v) -> v -> h -> h

 instance (HeapTag t, PList Cons t v r) => Heap t v (Cons t v r) where
     fetch _ p

Implementing RefMonads in Haskell without ST,IO

2003-05-30 Thread Tim Sweeney



Given the common definition of RefMonad:

class Monad m => RefMonad m r | m -> r where
  newRef :: a -> m (r a)
  readRef :: r a -> m a
  writeRef :: r a -> a -> m ()

Is it possible to actually implement a working instance of RefMonad in
Haskell, without making use of a built-in monad like IO or ST?  If so, I'd
love any tips -- I've been making good use of monads for a while, but can't
figure this one out.

The Java programmer in me wants to implement RefMonad by passing around a
function from integers (think of them as pointers or heap indices) to
"objects", and in readRef, "cast" the "object" to the appropriate type "t".

If it's not possible to implement a typesafe 
RefMonad instance directly in Haskell, without making use of built-in imperative 
features like IO, then doesn't this refute the claims that monads implement 
imperative features functionally?

-Tim


Re: Implementing RefMonads in Haskell without ST,IO

2003-05-30 Thread Tim Sweeney
Hi Derek,

How can one implement RefMonad below directly in Haskell?

class Monad m => RefMonad m r | m -> r where
  newRef :: a -> m (r a)
  readRef :: r a -> m a
  writeRef :: r a -> a -> m ()

I've been able to implement a monad that encapsulates references to
integers, by creating a monad that passes the heap (a function int->int,
from heap indices to integer values stored in the heap) in and out of
functions.  That was pretty straightforward, and the monad makes everything
look like an imperative language that supports updatable references to
integers (and only integers).
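
A minimal sketch of that Int-only reference monad, with illustrative names
(IntHeapM, IntRef); the heap is kept as a function from indices to Ints,
paired with the next free index:

  import Control.Monad (ap, liftM)

  -- State threaded through every step: (next free index, heap function).
  newtype IntHeapM a =
    IntHeapM { runIntHeapM :: (Int, Int -> Int) -> (a, (Int, Int -> Int)) }

  instance Functor IntHeapM where
    fmap = liftM
  instance Applicative IntHeapM where
    pure a = IntHeapM (\s -> (a, s))
    (<*>)  = ap
  instance Monad IntHeapM where
    IntHeapM m >>= f = IntHeapM $ \s -> let (a, s') = m s in runIntHeapM (f a) s'

  newtype IntRef = IntRef Int

  newIntRef :: Int -> IntHeapM IntRef
  newIntRef v = IntHeapM $ \(n, heap) ->
    (IntRef n, (n + 1, \i -> if i == n then v else heap i))

  readIntRef :: IntRef -> IntHeapM Int
  readIntRef (IntRef n) = IntHeapM $ \s@(_, heap) -> (heap n, s)

  writeIntRef :: IntRef -> Int -> IntHeapM ()
  writeIntRef (IntRef n) v = IntHeapM $ \(m, heap) ->
    ((), (m, \i -> if i == n then v else heap i))

  -- e.g. fst $ runIntHeapM (newIntRef 7 >>= readIntRef) (0, error "empty slot")
  --      == 7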

But how can one implement RefMonad to support references of all possible
types simultaneously?

The problem is that readRef needs to return elements of an arbitrary type,
so it has to be able to extract them from some sort of heap function or
data structure.  But what does the heap data structure look like?  The
values returned by readRef :: r a -> m a depend on the type a, and I can't
figure out how to implement that, because a is local to that declaration.

If this were implemented in C++, for example, the heap could just store
objects, and readRef could cast the object to the appropriate type when
returning it.  But in Haskell, I don't see any way to do this.

-Tim

- Original Message -
From: Derek Elkins [EMAIL PROTECTED]
To: Tim Sweeney [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Thursday, May 29, 2003 10:31 PM
Subject: Re: Implementing RefMonads in Haskell without ST,IO


 On Thu, 29 May 2003 22:48:05 -0500
 Tim Sweeney [EMAIL PROTECTED] wrote:

  If it's not possible to implement a typesafe RefMonad instance
  directly in Haskell, without making use of built-in imperative
  features like IO, then doesn't this refute the claims that monads
  implement imperative features functionally?
 
  -Tim

 You certainly can have an instance of RefMonad that -simulates-
 updateable references. You can't implement this with update-in-place
 without an update-in-place primitive, and if Haskell had an update-in-place
 primitive that you could use anywhere it wouldn't be a pure language.
 Monads in Haskell -allow- imperative features and use of imperative
 features without breaking the purity of the entire language.  Outside a
 monadic computation equational reasoning always holds, even when talking
 about imperative actions. That monads implement an imperative effect
 doesn't mean they implement it the same way an imperative language
 would.  Of course, the actual implementation doesn't matter so
 implementing it imperatively is just as good as implementing it
 functionally as far as static semantics go, which is why everything
 (IO & ST) works out.



class MonadPlus m => MonadRing m

2003-05-30 Thread Tim Sweeney
class MonadPlus m => MonadRing m where
    mone :: m a
    (**) :: m a -> m a -> m a

Does anybody have experience with a construct like MonadRing above?  I'm
interested in prior implementations, examples of instances, and coherence
rules.

A conceptual example: consider ListMonad, used to implement
non-determinism, or computations that may produce multiple values.  In
this context, the order or duplication of elements in a list is not
significant; we think of them as sets of values:

From Functor:
fmap f xs -- apply a function to all elements in the set, and collect
the results in a set

From Monad:
xs >>= f -- apply a set-returning function to all elements of xs, and merge
all the results into a set
return x -- generate a singleton set containing only x

From MonadPlus:
mzero -- the empty set
xs++ys -- the union of sets xs and ys

From MonadRing:
xs ** ys -- the intersection of the sets xs and ys.
mone -- the unit of **.  Conceptually, the set of all values.  With
Haskell's type system, this notion isn't feasible with ListMonad directly,
but could be used with similar set-like monads containing elements drawn
from a finite set.

For set-like monads, there is then a further notion of a partial order
relation defined by set inclusion.  This could be defined along the lines of
mcontains :: m a -> m a -> m Bool.  For ListMonad, mcontains xs ys can be
defined as true iff xs contains all elements of ys, in order, allowing
arbitrary new elements that happen to occur between them.
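
For the list-as-set reading, the intersection half of (**) and the containment
test can be sketched directly in Haskell; mone is omitted because, as noted
above, the universal set isn't representable for lists over an arbitrary
element type:

  import Data.List (intersect)

  -- xs ** ys for the list monad: keep the elements common to both "sets".
  interSet :: Eq a => [a] -> [a] -> [a]
  interSet = intersect

  -- mcontains xs ys: True iff the elements of ys all occur in xs, in order,
  -- possibly with other elements of xs interleaved between them.
  mcontains :: Eq a => [a] -> [a] -> Bool
  mcontains _      []     = True
  mcontains []     _      = False
  mcontains (x:xs) (y:ys)
    | x == y    = mcontains xs ys
    | otherwise = mcontains xs (y:ys)

  -- e.g. mcontains [1,5,2,7,3] [1,2,3] == True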

I'm not sure of the practical utility of these constructs in the Haskell
world, but I've been working with them for a while in an experimental
language and have found them very powerful, especially when coupled with
first-class types with set-like functionality a la David McAllester's Ontic
language.  In my particular case, I have found the following syntactic sugar
to aid a great deal in the readability of complex monad expressions:

-- Non-monad constructs:

f(x) -- call function f with parameter x
let x=v in e -- let-binding.

-- Functor syntactic sugar:

f($x) -- like map f x
let x$=v in e -- like map (\x -> e) v

-- Monad syntactic sugar:

f(%x) -- like x >>= f
let x%=v in e -- like v >>= \x -> e

This could be considered a syntactic alternative to the do notation and
list comprehension notation.

-Tim Sweeney



Re: Tetration operator in functional programming

2000-08-26 Thread Tim Sweeney

It should reflect arithmetic laws of tetration, which for example
relate (b ^^ (a1*a2)) with b^^a1 and b^^a2 or so, but are there any?
So how should (a,b) --> t be defined?

I couldn't find any papers on this topic, so I'm trying to work out the laws
now.  The interesting ones are (a*b)^^c and (a^b)^^n.  They don't seem to be
as "obvious" as the rules for exponents and products, though.
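
For concreteness, here is the usual recursive definition of integer tetration
as a small Haskell helper (tetrate is an illustrative name; the thread's b^^n
is tetrate b n):

  -- b^^n is a right-associated tower of n copies of b:  b ^ (b ^ (... ^ b)).
  tetrate :: Integer -> Integer -> Integer
  tetrate _ 0 = 1
  tetrate b n = b ^ tetrate b (n - 1)

  -- e.g. tetrate 2 3 == 2 ^ (2 ^ 2) == 16
  --      tetrate 2 4 == 2 ^ 16      == 65536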

There is also a notion of "dependent tetration" which is even more
mysterious.  Just as Sigma(a:t).b corresponds to "dependent sum" (with types
or numbers) and Pi(a:t).b corresponds to "dependent product", there is a
Tetra(a:t).b corresponding to "dependent tetration".

This operation is mysterious because exponentiation, the operator upon which
tetration is defined, is the first operator in the Ackermann hierarchy which
is not symmetric.  So we can't just carelessly say that Tetra(a:t).f(a) =
((...->f(e2))->f(e1)), where e1,e2,... are the elements of type t.  The
result of Tetra(a:t).f(a) depends on the order in which we choose the
elements of t.  Which begs the question: *which* order is appropriate?

Perhaps our notion of dependent products and dependent functions is
currently oversimplified, since we are neglecting this hidden "ordering"
which is implicitly involved, but can neatly be hidden because sums and
products are symmetric up to isomorphism, i.e. a+b~=b+a and a*b~=b*a.  I
wonder, are there useful types (in a type theory) for which sum and product
are not so neatly symmetric?  One such construction is to apply the rules of
simple matrix algebra, but put types inside the matrices rather than
numbers.  The resulting types associate, (a*b)*c~=a*(b*c), but we don't have
a*b~=b*a, since the "shape" of the resulting matrix depends on the ordered
dimensions of the inputs.

-Tim





Higher-order function application

2000-08-23 Thread Tim Sweeney

In Haskell, only a single notion of "function application" exists, where a
function f::t->u is passed a parameter of type t, returning a result of type
u.  Example function calls are:

1+2
sin 3.14
map sin [1,2,3]

However, a higher-order notion of function application seems sensible in
many cases.  For example, consider the following expressions, which Haskell
rejects, despite an "obvious" programmer intent:

cos+sin        -- intent: \x->((cos x)+(sin x))
cos(sin)       -- intent: \x->cos(sin(x))
(cos,sin)(1,2) -- intent: (cos(1),sin(2))
(+)(1,2)       -- intent: (+)(1)(2)
cos [1,2,3]    -- intent: map cos [1,2,3]

From this intuition, let's postulate that it's possible for a compiler to
automatically accept such expressions by translating them to the more
verbose "intent" listed above, using rules such as:

1. Operator calls like (+) over functions translate to lambda abstractions
as in the "cos+sin" example.

2. A pair of functions f::t->u, g::v->w acts as a single function from pairs
to pairs, (f,g)::(t,v)->(u,w).

3. Translating function calling into function composition, like "cos(sin)".

4. Automatic currying when a pair is passed to a curried function.

5. Automatic uncurrying when a function expecting a parameter of type (t,u)
is passed a single value of type t.

6. Applying a function f::t->u to a list x::[t] translates to "map f x".
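
A few of these rules can be written today as explicit Haskell combinators
rather than implicit coercions; a sketch (liftOp, pairF and mapF are
illustrative names):

  -- Rule 1: lift a binary operator pointwise over two functions.
  liftOp :: (u -> u' -> v) -> (t -> u) -> (t -> u') -> (t -> v)
  liftOp h f g = \x -> h (f x) (g x)      -- e.g. liftOp (+) cos sin

  -- Rule 2: a pair of functions acting componentwise on a pair of arguments.
  pairF :: (t -> u) -> (v -> w) -> (t, v) -> (u, w)
  pairF f g (x, y) = (f x, g y)           -- e.g. pairF cos sin (1, 2)

  -- Rule 6: applying a function to a list means mapping it.
  mapF :: (t -> u) -> [t] -> [u]
  mapF = map                              -- e.g. mapF cos [1,2,3]

  -- e.g. liftOp (+) cos sin 0.5  ==  cos 0.5 + sin 0.5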

I wonder, are these rules self-consistent?  Are they unambiguous in all
cases?  Are there other rules we can safely add?

It also seems that every statement above is simply a new axiom at the type
checker's disposal.  For example, to describe the general notion of
"cos+sin" meaning "\x->(cos(x)+sin(x))", we say:

for all types t,u,v,
for all functions f,g :: t->u,
for all functions h :: u->u->v,
h (f,g) = \x->h(f(x),g(x)).

Is this "higher order function application" a useful notion, and does any
research exist on the topic?

-Tim