Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Keean Schupke
Can a C function be pure? I guess it can... The trouble is you cannot
prove it's pure?

But - why would you want to use a pure C function? The chances of any useful
C library function being pure are slim - and the performance of GHC in some
of the benchmarks shows that there is hardly any speed advantage (for a pure
function)...
   Keean.
Benjamin Franksen wrote:
On Monday 22 November 2004 23:22, Keean Schupke wrote:
 

It seems to me that as unsafePerformIO is not in the standard and only
implemented on some
compilers/interpreters, that you limit the portability of code by using
it, and that it is best avoided. Also as any safe use of unsafePerformIO
can be refactored to not use it I could
certainly live without it.
   

With one exception: If a foreign function (e.g. from a C library) is really 
pure, then I see no way to tell that to the compiler other than using 
unsafePerformIO. IIRC, unsafePerformIO is in the standard FFI libraries.
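
For instance, a minimal sketch of this idiom, assuming a hypothetical pure C
function int c_gcd(int, int) (the names are invented for illustration):

{-# LANGUAGE ForeignFunctionInterface #-}
module PureFFI (gcdC) where

import Foreign.C.Types (CInt)
import System.IO.Unsafe (unsafePerformIO)

-- Import the C function at an IO type, as for any foreign call ...
foreign import ccall "c_gcd" c_gcd_io :: CInt -> CInt -> IO CInt

-- ... and discharge the IO with unsafePerformIO, on the programmer's own
-- assertion that c_gcd has no observable side effects.
gcdC :: CInt -> CInt -> CInt
gcdC a b = unsafePerformIO (c_gcd_io a b)

-- (The FFI also allows importing directly at a pure type:
--  foreign import ccall "c_gcd" c_gcd :: CInt -> CInt -> CInt)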

Ben
 



Re: [Haskell-cafe] Problem with overlapping class instances

2004-11-23 Thread Graham Klyne
At 21:40 22/11/04 +0100, Ralf Laemmel wrote:
Instance selection and thereby overlapping resolution
is *independent* of constraints. It is defined to be purely
syntactical in terms of instance heads. See the HList paper
for some weird examples.
That explains it.  Thanks!
#g
--

Ralf
Graham Klyne wrote:
The reported overlapping instance is [Char], which I take to be derived 
from the type constructor [] applied to type Char, this yielding a form 
that matches (cw c).  But the instance ConceptExpr (cw c) is declared to 
be dependent on the context ConceptWrapper cw c, which has *not* been 
declared for the type constructor [].

GHCi with -fglasgow-exts is no more informative.
What am I missing here?


Graham Klyne
For email:
http://www.ninebynine.org/#Contact


[Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Graham Klyne
I think this is a useful debate, because it touches on how Haskell meets 
real-world programming needs, so I shall continue in that spirit...

At 22:22 22/11/04 +, Keean Schupke wrote:
Obviously without knowing the details I am speculating, but would it not
be possible to do a first pass of the XML and build a list of files to read
(a pure function)? This returns its result to the IO monad, where the files
are read and concatenated together, and passed to a second (pure functional)
processing function. If written well this can take advantage of lazy
execution, so both functions end up running concurrently.
In an ideal world, it is certainly possible to separate the pure and 
non-pure aspects of the code, and do something like you suggest.  But my 
position was that I was working with an existing codebase (HaXml) which had 
not been structured with this requirement in mind, and I absolutely did not 
want to start from scratch (as it was, I was forced into some substantial 
refactoring).  This was one case where, in order to get any result at all 
with the time/effort available to me, I needed to hide the I/O within an 
otherwise pure function.

Yes, there are better ways but, being a Bear of Very Little Brain, I have 
to work with the tools, intellectual and otherwise, that are at my 
disposal.  Most software is not built in the optimum fashion, or even 
anything close to it.  I would suggest that one of the challenges for 
functional programming is to make it easy to do the right thing.  I came 
to functional programming with quite a strong bias to make it work for me, 
inspired many years ago by John Backus' famous paper, and a presentation by 
David Turner about KRC, and a few other things.  Many programmers I've 
spoken to who have tried functional programming have given up on it because 
it's too hard.

It seems to me that as unsafePerformIO is not in the standard and only 
implemented on some
compilers/interpreters, that you limit the portability of code by using 
it, and that it is best avoided. Also as any safe use of unsafePerformIO 
can be refactored to not use it I could
certainly live without it.
Well, I am concerned about portability.  I insist on using Hugs when many 
people use just GHC, and one of the reasons is that I don't want to get 
locked into one compiler's extensions.  But sometimes it is necessary to 
use extensions:  there are many features of Haskell-98++ that are almost 
essential (IMO) to practical software development.  Including, I think, 
unsafePerformIO (on rare occasions).  My touchstone is that I'll use 
language extensions when I have to, provided they are supported by both 
Hugs and GHC.

What's my point in all this?  I supposed it might be summed up as: The 
best is the enemy of the good.

#g
--
Graham Klyne wrote:
[Switching to Haskell-cafe]
I have used it once, with reservations, but at the time I didn't have the 
time/energy to find a better solution.  (The occasion of its use was 
accessing external entities within an XML parser;  by making the 
assumption that the external entities do not change within any context in 
which results from a program are compared, I was able to satisfy the 
proof obligation of not causing or being sensitive to side effects.)

The reason this was important to me is that I wanted to be able to use 
the parser from code that was not visibly in the IO monad.  For me, 
treating Web data transformations as pure functions is one of the 
attractions of using Haskell.

(Since doing that, I had an idea that I might be able to parameterize the 
entity processing code on some Monad, and use either an Identity monad or 
IO depending on the actual requirements.  This way, I could keep pure XML 
processing out of the IO monad, but use IO when IO was needed.)
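
A small sketch of that idea, with invented names, just to make it concrete:
the entity-fetching action becomes a parameter, so the same core can run in
Identity or in IO.

import Control.Monad.Identity (runIdentity)
import Data.Char (toUpper)

-- The processing core is parameterized on how external entities are fetched.
parseWith :: Monad m => (String -> m String) -> String -> m [String]
parseWith resolve doc = mapM resolve (lines doc)

-- Pure instantiation: "resolve" entities from an in-memory rule.
parsePure :: String -> [String]
parsePure = runIdentity . parseWith (return . map toUpper)

-- IO instantiation: resolve entities by reading files named in the document.
parseIO :: String -> IO [String]
parseIO = parseWith readFile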

In short:  I think it's usually possible to avoid using unsafePerformIO, 
but I'd be reluctant to cede it altogether, if only for sometimes 
quick-and-dirty pragmatic reasons.

#g

Graham Klyne
For email:
http://www.ninebynine.org/#Contact



[Haskell-cafe] Haskell-cafe message-length restriction

2004-11-23 Thread Graham Klyne
To the list Haskell-cafe admin...
May I suggest that the maximum message length for postings to the 
haskell-cafe list without moderation be raised from its current 5K 
limit?   It seems to me that a value of (say) 20K would reduce the 
moderator's workload without obviously allowing too many undesirables ... 
(though I allow the moderator surely knows more about what is thrown at the 
list that most of us don't see.)

Just a thought.
#g

Graham Klyne
For email:
http://www.ninebynine.org/#Contact


[Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Keean Schupke
Off topic, but interesting. Someone else keeps quoting this at me... I 
prefer Knuth - paraphrased as I can't remember the quote - The best 
software projects are the ones where the source code has been lost about 
half way through the development and started from scratch.

The point is programmers start by exploring a problem space without 
understanding it. Poor programmers just accept the first solution they 
put down. Good programmers re-implement. Great programmers have a sixth 
sense of when things are about to get ugly, and start again (and the 
better you are the less you actually have to implement before you 
realise things can be refactored for the better)...

Graham Klyne wrote:
What's my point in all this?  I supposed it might be summed up as: 
The best is the enemy of the good.

#g
--



Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Glynn Clements

Keean Schupke wrote:

 Can a C function be pure? I guess it can... The trouble is you cannot
 prove it's pure?
 
 But - why would you want to use a pure C function?

Because it already exists? E.g. most BLAS/LAPACK functions are pure;
should they be re-written in Haskell?

[Yes, I know that BLAS/LAPACK are written in Fortran, but I don't
think that changes the argument. The resulting object code (which is
what you would actually be using) wouldn't be significantly different
if they were written in C.]
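
As an illustration, binding one such routine as a pure Haskell function is
fairly short. A sketch, assuming the CBLAS entry point cblas_ddot is
available for linking; since it only reads its arguments, the unsafePerformIO
is arguably justified:

{-# LANGUAGE ForeignFunctionInterface #-}
module Dot (dot) where

import Foreign.C.Types (CDouble, CInt)
import Foreign.Marshal.Array (withArray)
import Foreign.Ptr (Ptr)
import System.IO.Unsafe (unsafePerformIO)

foreign import ccall "cblas_ddot"
  c_ddot :: CInt -> Ptr CDouble -> CInt -> Ptr CDouble -> CInt -> IO CDouble

-- Dot product of two equal-length lists, exposed as a pure function.
dot :: [Double] -> [Double] -> Double
dot xs ys = unsafePerformIO $
  withArray (map realToFrac xs) $ \px ->
    withArray (map realToFrac ys) $ \py ->
      realToFrac `fmap` c_ddot (fromIntegral (length xs)) px 1 py 1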

-- 
Glynn Clements [EMAIL PROTECTED]


Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Conor McBride
Keean Schupke wrote:
Can a C function be pure? I guess it can... The trouble is you cannot
prove it's pure?
A C function might have no observable side effects, even if it operates
destructively over its own private data structures. It mightn't be too
hard to establish a sound test for this sort of purity (the one we have
already is sound; it always says no; some improvement may be possible).
Clearly completeness is too much to hope for.
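
The same observation holds inside Haskell: the ST monad lets a function
update its own private arrays destructively while exposing a pure interface.
A small sketch (nothing to do with LFPL, just counting sort over Ints in a
known range):

import Control.Monad (forM_)
import Data.Array.ST (newArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (UArray, assocs)

-- Pure from the outside; all the destructive updates happen on an array
-- that never escapes. Assumes every element of xs lies in [0, bound].
countingSort :: Int -> [Int] -> [Int]
countingSort bound xs = concat [ replicate n i | (i, n) <- assocs counts ]
  where
    counts :: UArray Int Int
    counts = runSTUArray $ do
      a <- newArray (0, bound) 0
      forM_ xs $ \x -> readArray a x >>= writeArray a x . (+ 1)
      return a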
But - why would you want to use a pure C function? The chances of any 
useful C library function being pure are slim - and the performance of
 GHC in some of the benchmarks shows that there is hardly any speed
 advantage (for a pure function)...
What about the other benchmarks? There are plenty of operations where
programmers can do a neater job than compilers at deciding that a given
data structure is known only to one consumer and can therefore be
manipulated destructively, recycled aggressively etc. I know modern
recycling is marvellous, but reduced consumption is better, isn't it?
The C functions I'm thinking of are the output from Hofmann & co's
LFPL compiler: pure *linear* functional programs which run in the heap
they were born with. There are potential speed gains too: the knowledge
that you don't need to keep the original input means that you can
operate deep inside it in constant time, at the cost of maintaining
some extra pointers. (Does anybody know of a linear type system which
allows this? Basically, a list xs contains a pointer to its tail, so
holding a tail-pointer for xs would be a duplicate reference: problem.
But perhaps it's ok for the holder of xs also to hold its tail-pointer.)
This stuff isn't really my thing, but I'm an interested spectator.
These programs aren't funny interactive hard-drive-formatting things,
so they're probably irrelevant to this particular argument. Nonetheless,
they're hard to write efficiently in functional programming languages as
we know them. They're hard to write safely in C, but sometimes we just
get fed up with knowing useful stuff that we can't tell the compiler.
Is uniqueness worth a second look?
Conor
--
http://www.cs.rhul.ac.uk/~conor  for one more week


Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Keean Schupke
Have you looked at Linear Aliasing, the type system used for TAL (typed
assembly language)... one would assume that if a C compiler which compiles
to TAL were produced, then you could prove purity?

Keean.
Conor McBride wrote:
Keean Schupke wrote:
Can a C function be pure? I guess it can... The trouble is you cannot
prove it's pure?

A C function might have no observable side effects, even if it operates
destructively over its own private data structures. It mightn't be too
hard to establish a sound test for this sort of purity (the one we have
already is sound; it always says no; some improvement may be possible).
Clearly completeness is too much to hope for.
But - why would you want to use a pure C function? The chances of any 
useful C library function being pure are slim - and the performance of
 GHC in some of the benchmarks shows that there is hardly any speed
 advantage (for a pure function)...
What about the other benchmarks? There are plenty of operations where
programmers can do a neater job than compilers at deciding that a given
data structure is known only to one consumer and can therefore be
manipulated destructively, recycled aggressively etc. I know modern
recycling is marvellous, but reduced consumption is better, isn't it?
The C functions I'm thinking of are the output from Hofmann & co's
LFPL compiler: pure *linear* functional programs which run in the heap
they were born with. There are potential speed gains too: the knowledge
that you don't need to keep the original input means that you can
operate deep inside it in constant time, at the cost of maintaining
some extra pointers. (Does anybody know of a linear type system which
allows this? Basically, a list xs contains a pointer to its tail, so
holding a tail-pointer for xs would be a duplicate reference: problem.
But perhaps it's ok for the holder of xs also to hold its tail-pointer.)
This stuff isn't really my thing, but I'm an interested spectator.
These programs aren't funny interactive hard-drive-formatting things,
so they're probably irrelevant to this particular argument. Nonetheless,
they're hard to write efficiently in functional programming languages as
we know them. They're hard to write safely in C, but sometimes we just
get fed up with knowing useful stuff that we can't tell the compiler.
Is uniqueness worth a second look?
Conor
--
http://www.cs.rhul.ac.uk/~conor  for one more week



Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Keean Schupke
I thought these libraries did have some global state, like choosing
which solver is used... In which case treating them as pure could
be dangerous...
   Keean.

Glynn Clements wrote:
Keean Schupke wrote:
 

Can a C function be pure? I guess it can... The trouble is you cannot
prove it's pure?

But - why would you want to use a pure C function.
   

Because it already exists? E.g. most BLAS/LAPACK functions are pure;
should they be re-written in Haskell?
[Yes, I know that BLAS/LAPACK are written in Fortran, but I don't
think that changes the argument. The resulting object code (which is
what you would actually be using) wouldn't be significantly different
if they were written in C.]
 



Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Benjamin Franksen
On Tuesday 23 November 2004 10:03, you wrote:
 But - why would you want to use a pure C function. The chances of any
 useful C library function being pure are slim - and the performance of GHC
 in some of the benchmarks shows that there is hardly any speed advantage

The typical case (for me) is a foreign library exporting mostly non-pure 
routines, but with one or two pure functions among them.

But as has been stated already, unsafePerformIO is not needed in this case.

BTW, if you reply to the list anyway, don't reply to me in person. Otherwise I 
get everything duplicated (and it goes onto the wrong folder ;-).

Ben
-- 
Top level things with identity are evil.  -- Lennart Augustsson


Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread David Roundy
On Mon, Nov 22, 2004 at 08:32:33PM +, Graham Klyne wrote:
 [Switching to Haskell-cafe]
 
 At 11:26 22/11/04 +, you wrote:
 I would ask an alternative question - is it possible to live without
 unsafePerformIO? I have never needed to use it!

There are plenty of non-IO reasons to use unsafePerformIO, for which it is
essential.  If you want to write haskell code that uses a pointer
(allocated possibly via an FFI C routine), it has to be in the IO monad.
If you know that this pointer doesn't access memory that'll be changed at
random (or by other routines), you can (and *should*) safely use
unsafePerformIO.

Also, if you're interested in using weak pointers (for example, to do
memoization), you'll almost certainly need to use unsafePerformIO.  Again,
the result can, and should, be encapsulated, so the module that uses
unsafePerformIO exports only pure functions (unless of course, there are
any that actually perform IO).
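
A minimal sketch of that memoization idiom (using a plain Data.Map rather
than the weak pointers a production version would want, and with invented
names):

module Memo (memoFib) where

import Data.IORef (IORef, newIORef, readIORef, modifyIORef)
import qualified Data.Map as Map
import System.IO.Unsafe (unsafePerformIO)

-- The table is a top-level mutable cell, created with unsafePerformIO.
{-# NOINLINE table #-}
table :: IORef (Map.Map Integer Integer)
table = unsafePerformIO (newIORef Map.empty)

-- Exposed as a pure function: repeated calls with the same argument
-- reuse the cached result instead of recomputing it.
memoFib :: Integer -> Integer
memoFib n = unsafePerformIO $ do
  m <- readIORef table
  case Map.lookup n m of
    Just v  -> return v
    Nothing -> do
      let v = if n < 2 then n else memoFib (n - 1) + memoFib (n - 2)
      v `seq` modifyIORef table (Map.insert n v)
      return v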
-- 
David Roundy
http://www.darcs.net


Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Keean Schupke
David Roundy wrote:
There are plenty of non-IO reasons to use unsafePerformIO, for which it is
essential.  If you want to write haskell code that uses a pointer
(allocated possibly via an FFI C routine), it has to be in the IO monad.
If you know that this pointer doesn't access memory that'll be changed at
random (or by other routines), you can (and *should*) safely use
unsafePerformIO.
 

Does it? Can't you just declare:

foreign import ccall "somefn" somefn :: Ptr Double -> Ptr Double
Also, if you're interested in using weak pointers (for example, to do
memoization), you'll almost certainly need to use unsafePerformIO.  Again,
the result can, and should, be encapsulated, so the module that uses
unsafePerformIO exports only pure functions (unless of course, there are
any that actually perform IO).
 

Don't know about this one, got a short example?
   Keean.


[Haskell-cafe] Mutable and persistent values (was: Top Level TWI's again)

2004-11-23 Thread Graham Klyne
[I'm moving my response to the Haskell-cafe mailing list, as it seems more 
appropriate for this kind of discussion.]

Your question seems to touch on the old chestnut of how to reconcile pure 
functional programs with the inherently procedural aspects of I/O and state 
management.  Most of the richness to which you refer is, to my mind, a 
fairly esoteric corner of Haskell with which I've had little cause to 
engage.  I sense you may be coming from a perspective similar to mine of a 
couple of years ago, hence this response.  Maybe what follows is already 
well known to you (some of it is basic stuff), so please accept my 
apologies if I'm retreading well-worn paths for you.

The classical work (c. 1970's) on using functions to describe programs that 
update machine state models such programs as functions that take a state 
and return a new state, maybe returning some other values as well.  This 
could get complicated to handle, and along the way functional structures 
based on monads were invoked to tidy up the housekeeping.  (A paper by 
Simon Peyton-Jones and John Launchbury [1] about handling state in Haskell 
was useful for me.)  Note the central role of higher order functions in 
dealing with this:  a state-monad function represents a computation on some 
state, not the result of that state;  the result of the computation is 
available only when the function is applied to some initial state.  The 
do-notation is a syntactic sugar to make this easier to write, but (I 
think) it can sometimes obscure what is going on.  Philip Wadler's paper 
The Essence of Functional Programming [3] nicely illustrates how monads 
can take care of various housekeeping chores.
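
A tiny example of that style, assuming the mtl package's Control.Monad.State:
the computation describes a state transformation, and nothing happens until
runState applies it to an initial state.

import Control.Monad.State (State, get, put, runState)

-- A state-monad computation: read the counter, bump it, return the old value.
tick :: State Int Int
tick = do
  n <- get        -- read the current state
  put (n + 1)     -- write the new state
  return n        -- and also return a value

-- runState (do a <- tick; b <- tick; return (a, b)) 0  ==  ((0, 1), 2)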

The IO monad can be viewed as a special case that allows a program to 
access and update state (the real world) that exists outside the program.

So, turning to your specific questions/concerns:
Also, ultimately, I want to be able to save  my work and restart
the next day (say) picking up the tags where I left off.
Your top-level program must be in the IO monad (or: is an expression that 
describes an IO monad value, which is a computation that interacts with the 
real world).  Thus, it can access yesterdays work, and also your 
interactive inputs, and compute a new value that is today's work.  The IO 
monad allows you to sequence the various interactions with persistent state 
and user dialogue, in a purely functional way.

I'm darned if I can see how to do this in a callback without global
variables (and references from other callbacks, by the way).
I would expect the callback function to accept an initial value of the 
global variable(s), and return its new value(s), leaving the calling site 
to ensure that the new value is used as required.  (This interaction may be 
hidden in a monad (e.g. a state monad), and if the callback itself 
involves interaction with a user must use the IO monad.)

   I want to use the old value of a tag to compute the new value, in a
   callback,
   I want to access the tag from other callbacks, and
   I want to add the value to a mutable list from within the callback.
All of these are accommodated by adopting the 
accept-a-value-and-return-a-new-value perspective.  The calling program 
that invokes the callbacks deals with ensuring that the callbacks have 
access to the appropriate results from other callbacks.
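
Concretely, a minimal sketch of that perspective, with invented names
(Control.Monad.State packages up the same plumbing):

-- A callback takes an event and the current state, and returns a result
-- together with the new state.
type Callback s = String -> s -> (String, s)

-- Example: use the old value of a "tag" to compute the new one.
nextTag :: Callback Int
nextTag event tag = ("handled " ++ event ++ " as #" ++ show tag, tag + 1)

-- The caller threads the state through a sequence of callbacks.
runAll :: Callback s -> [String] -> s -> ([String], s)
runAll _  []     s = ([], s)
runAll cb (e:es) s = let (r, s')   = cb e s
                         (rs, s'') = runAll cb es s'
                     in (r : rs, s'')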

It is my view that to use a functional language effectively, it is necessary 
to think differently about its structure than one would for a conventional 
imperative program.  Global values become parameters and return 
values;  mutable values become input values and new (separate) return 
values.  If you're used to writing imperative programs that do not use 
global state, but rather use parameters to pass values, the difference 
isn't always quite as great as might be imagined.

For complex values, this sounds horribly inefficient, but because values 
are pure (not mutable), very high levels of sharing can (sometimes, with 
good design!) be achieved.  Thus, a return value may well consist 
mostly of a copy of the input value, with differences, rather than actually 
being a complete new copy.  (Chris Okasaki's book shows well how to design 
to obtain these benefits.)

Also, because Haskell is lazy, complex structures passed between functions 
don't necessarily ever exist in their entirety.  Sometimes, I think of a 
data structure as describing a computational traversal through some data, 
rather than something that actually exists.  (This leads to very elegant 
expression of things like Prolog search-and-backtrack algorithms.)

So the main point of this posting is to say that I think that the 
richness of implicit parameters, linear implicit parameters, 
unsafePerformIO, ... to which you refer is a shoal of red herring, as far 
as your goals are concerned.

I hope this helps.  I've added below a couple of links to my web site to 
stuff that I accumulated while learning Haskell over the past couple of 

Re: [Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread David Roundy
On Tue, Nov 23, 2004 at 01:51:24PM +, Keean Schupke wrote:
 David Roundy wrote:
 
 There are plenty of non-IO reasons to use unsafePerformIO, for which it is
 essential.  If you want to write haskell code that uses a pointer
 (allocated possibly via an FFI C routine), it has to be in the IO monad.
 If you know that this pointer doesn't access memory that'll be changed at
 random (or by other routines), you can (and *should*) safely use
 unsafePerformIO.
 
 Does it? Can't you just declare:
 
 foreign import ccall "somefn" somefn :: Ptr Double -> Ptr Double

Right, but if you want to access the contents of that pointer in haskell,
you have to use the IO monad.  True, in principle you could write a pointer
dereferencing function in C:

foreign import ccall "readarray" readarray :: Ptr Double -> Int -> Double

but that hardly seems like either an efficient or elegant way of getting
around the fact that haskell can't read pointers outside the IO monad.
Also, of course, this readarray function written in C is no safer than
using unsafePerformIO with peekArray.

In case you're wondering, peekArray needs to be in the IO monad because
there's no guarantee that the memory pointed to by the Ptr is constant--it
may even be a pointer to an mmapped file, in which case it could change
value independently of the program's execution.
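
When the memory behind the pointer really is known never to change, the
unsafePerformIO version looks something like the following sketch (the
C-side name is invented for illustration):

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.C.Types (CDouble)
import Foreign.Marshal.Array (peekArray)
import Foreign.Ptr (Ptr)
import System.IO.Unsafe (unsafePerformIO)

-- Assumed: a foreign routine handing back a pointer to a table of n
-- constants that the library never mutates.
foreign import ccall "get_constant_table"
  getConstantTable :: Int -> Ptr CDouble

-- A pure view of the table, justified only by the (unchecked) assumption
-- that the memory is immutable for the lifetime of the program.
constantTable :: Int -> [Double]
constantTable n =
  map realToFrac (unsafePerformIO (peekArray n (getConstantTable n)))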

 Also, if you're interested in using weak pointers (for example, to do
 memoization), you'll almost certainly need to use unsafePerformIO.  Again,
 the result can, and should, be encapsulated, so the module that uses
 unsafePerformIO exports only pure functions (unless of course, there are
 any that actually perform IO).

 Don't know about this one, got a short example?

I have a long and complicated example...

http://abridgegame.org/cgi-bin/darcs.cgi/darcs/AntiMemo.lhs?c=annotate

This is complicated because it's doing antimemoization rather than
memoization, and being backwards it's a bit trickier.  But it *is* an
example of a module that exports only pure functions, and couldn't be
written without unsafePerformIO (and does no IO).
-- 
David Roundy
http://www.darcs.net


[Haskell-cafe] Re: Global variables again

2004-11-23 Thread Benjamin Franksen
[we should really keep this on haskell-cafe because such lengthy discussions 
are what the cafe is for]

On Tuesday 23 November 2004 10:26, Adrian Hey wrote:
 On Monday 22 Nov 2004 4:03 pm, Benjamin Franksen wrote:
  This is getting ridiculous. At least two workable alternatives have been
  presented:
 
  - C wrapper (especially if your library is doing FFI anyway)
  - OS named semaphores

 Neither of these alternatives is a workable general solution.

Since the problem only appears in special situations, a general solution is 
not required, nor is it desirable (because of the danger of infection with 
the global variable disease.)

 There are several significant problems with both, but by far
 the most significant problem (at least if you believe that top
 level mutable state is evil) is that they both rely on the use
 of top level mutable state. If this is evil it is surely just as
 evil in C or OS supplied resources as it is in Haskell.

The evil is in the world in the form of C libraries with hidden global 
variables and hardware with non-readable registers.

What I am arguing for is to *contain* this disease by forcing a solution to 
happen outside of Haskell.

What you are arguing for (i.e. a general solution *in* Haskell) amounts to 
(deliberately) *spreading* the disease.

 The fact that one solution requires the use of a completely different
 programming language 

And that is exactly the point: In order to do evil you have to go somewhere 
else, preferably to where the problem originally came from. If it originated 
in C, go fix it on the C level. If it originates in the OS, go fix it on the 
OS level. As for broken hardware, you should use an OS-level mechanism, so that 
multiple initialization is prevented OS-wide and not only per program run.

 and the other requires the use of a library which 
 could not be implemented in Haskell (not without using unsafePerformIO
 anyway) must be telling us that there something that's just plain missing
 from Haskell. 

Yes it's plain missing and for good reasons. There are many things plainly 
missing from Haskell besides global variables.

 IMO this is not a very satisfactory situation for a language 
 that's advertised as general purpose.

General purpose doesn't mean that any programming idiom is supported.

  Further, as for evidence or credible justification for my claim,
  you can gather it from the numerous real-life examples I gave, and which
  you chose to ignore or at least found not worthy of any comment.

 I have no idea what examples you're talking about. Did you post any code?

No, I didn't post code. I already said it's anecdotal evidence. For instance, 
I was talking about using the ONC/RPC implementation on VxWorks, which is 
broken because they internally use thread-local mutable state.

 If so, I must have missed it for some reason. Perhaps you're referring
 to your elimination of unsafePerformIO from a library you were writing.

I wasn't.

  Of course,
  these examples are only anecdotal but I think this is better than a
  completely artificial requirement (like your oneShot).

 Being able to avoid the use of top level mutable state sometimes (or even
 quite often) is not proof that it's unnecessary, 

True. I have never claimed that, though. What I claimed is that in the cases 
where they are necessary, FFI is probably used anyway, so it *is* workable to 
use a foreign language wrapper.

In order to convince me that this is wrong you could present a (real-world, no 
artificial requirements) example that does not require the use of FFI anyway. 
If you can do so (which I doubt) I might be willing to accept a 
compiler-supported standard library routine with a very long and very ugly 
name like

warningBadStyleUseOnlyIfAbsolutelyNecessary_performOnlyOnce :: IO a -> IO a

;-)

 especially when nobody 
 (other than yourself presumably) knows why you were using it in the first
 place. 

Not I. And it was for convenience only, as I proved by completely eliminating 
them without making the code any more complicated. I never claimed that this 
proves anything, it was just a personal experience.

BTW, the main reason they use global variable in C all the time is because 
it's just so damn convenient (at first) and *not* because there are problems 
otherwise unsolvable. (There are *very* few exceptions.)

 However, the existence of just one real-world example where it does 
 appear unavoidable is pretty convincing evidence to the contrary IMO.

I agree that the alternatives are not a good *general* solution. I have been 
arguing that a general solution is not desirable.

  You have been asked more than once to present a *real-life* example to
  illustrate that
 
  (a) global variables are necessary (and not just convenient),
  (b) both above mentioned alternatives are indeed unworkable.

 I knew this would happen. I was asked to provide an example and I *did*.
 I gave the simplest possible example I had of the more general problem,
 and now this 

Re: [Haskell-cafe] Problem with overlapping class instances

2004-11-23 Thread Graham Klyne

At 22:05 22/11/04 +, Keean Schupke wrote:
The trick here is to use a type to represent the constraint rather
than a class, if possible.
   Keean
Hmmm, I'm not sure that I understand what you mean.
Considering my example (repeated below), is it that 'AtomicConcept' should 
be an algebraic datatype rather than just a type synonym?  Or is there more?

Or... I just found John Hughes' 1999 paper on Restricted Data Types in 
Haskell [1], which talks about representing class constraints by the type 
of its associated dictionary.  Is this what you mean?

#g
--
[1] http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps
spike-overlap-ConceptExpr.lhs
-
 type AtomicConcepts a  = [(AtomicConcept,[a])]
 type AtomicRoles a = [(AtomicRole   ,[(a,a)])]

 type TInterpretation a = ([a],AtomicConcepts a,AtomicRoles a)
 class (Eq c, Show c) => ConceptExpr c where
     iConcept  :: Ord a => TInterpretation a -> c -> [a]
...
 type AtomicConcept = String   -- named atomic concept
Declare AtomicConcept and AtomicRole as instances of ConceptExpr and RoleExpr
(AtomicRole is used by AL, and including AtomicConcept here for completeness).
 instance ConceptExpr AtomicConcept where
 iConcept = undefined
...
To allow a common expression to support multiple description logics,
we first define a wrapper class for DLConcept and DLRole:
 class ConceptExpr c => ConceptWrapper cw c | cw -> c where
     wrapConcept :: c -> cw c -> cw c
     getConcept  :: cw c -> c
Using this, a ConceptWrapper can be defined to be an instance of
ConceptExpr:
This is line 30:
 instance (ConceptWrapper cw c, ConceptExpr c) => ConceptExpr (cw c) where
 iConcept = iConcept . getConcept
Error message:
Reading file D:\Cvs\DEV\HaskellDL\spike-overlap-conceptexpr.lhs:
ERROR D:\Cvs\DEV\HaskellDL\spike-overlap-conceptexpr.lhs:30 - Overlapping
instances for class ConceptExpr
*** This instance   : ConceptExpr (a b)
*** Overlaps with   : ConceptExpr AtomicConcept
*** Common instance : ConceptExpr [Char]




Ralf Laemmel wrote:
Instance selection and thereby overlapping resolution
is *independent* of constraints. It is defined to be purely
syntactical in terms of instance heads. See the HList paper
for some weird examples.
Ralf
Graham Klyne wrote:
The reported overlapping instance is [Char], which I take to be derived 
from the type constructor [] applied to type Char, this yielding a form 
that matches (cw c).  But the instance ConceptExpr (cw c) is declared to 
be dependent on the context ConceptWrapper cw c, which has *not* been 
declared for the type constructor [].

GHCi with -fglasgow-exts is no more informative.
What am I missing here?



Graham Klyne
For email:
http://www.ninebynine.org/#Contact


Re: [Haskell-cafe] Problem with overlapping class instances

2004-11-23 Thread Keean Schupke
The problem is that (cw c) overlaps with String. It will still overlap
if you use a data decl.
It is the CW that needs to be a datatype. See below:

   Keean.
Graham Klyne wrote:
Hmmm, I'm not sure that I understand what you mean.
Considering my example (repeated below), is it that 'AtomicConcept' 
should be an algebraic datatype rather than just a type synonym?  Or 
is there more?

Or... I just found John Hughes' 1999 paper on Restricted Data Types in 
Haskell [1], which talks about representing class constraints by the 
type of its associated dictionary.  Is this what you mean?

#g
--
[1] http://www.cs.chalmers.se/~rjmh/Papers/restricted-datatypes.ps
spike-overlap-ConceptExpr.lhs
-
 type AtomicConcepts a  = [(AtomicConcept,[a])]
 type AtomicRoles a = [(AtomicRole   ,[(a,a)])]

 type TInterpretation a = ([a],AtomicConcepts a,AtomicRoles a)
 class (Eq c, Show c) => ConceptExpr c where
     iConcept  :: Ord a => TInterpretation a -> c -> [a]
...
 type AtomicConcept = String   -- named atomic concept
Declare AtomicConcept and AtomicRole as instances of ConceptExpr and 
RoleExpr
(AtomicRole is used by AL, and including AtomicConcept here for 
completeness).

 instance ConceptExpr AtomicConcept where
 iConcept = undefined
...
To allow a common expression to support multiple description logics,
we first define a wrapper class for DLConcept and DLRole:
 class ConceptExpr c => ConceptWrapper cw c | cw -> c where
     wrapConcept :: c -> cw c -> cw c
     getConcept  :: cw c -> c
Do this:
data CW cw = CW cw
class ConceptWrapper cw c | cw -> c where
   wrapConcept :: c -> (CW cw) c -> (CW cw) c
   getConcept :: (CW cw) c -> c
Using this, a ConceptWrapper can be defined to be an instance of
ConceptExpr:
This is line 30:
 instance (ConceptWrapper cw c, ConceptExpr c) => ConceptExpr (cw c) where
 iConcept = iConcept . getConcept

instance (ConceptWrapper (CW cw) c, ConceptExpr c) => ConceptExpr ((CW cw) c) where

Error message:
Reading file D:\Cvs\DEV\HaskellDL\spike-overlap-conceptexpr.lhs:
ERROR D:\Cvs\DEV\HaskellDL\spike-overlap-conceptexpr.lhs:30 - Overlapping
instances for class ConceptExpr
*** This instance   : ConceptExpr (a b)
*** Overlaps with   : ConceptExpr AtomicConcept
*** Common instance : ConceptExpr [Char]



[Haskell-cafe] Re: Top Level TWI's again was Re: [Haskell] Re: Parameterized Show

2004-11-23 Thread Graham Klyne
At 10:02 23/11/04 +, you wrote:
Off topic, but interesting,
Sure... that's why it's in 'cafe, right?
Someone else keeps quoting this at me... I prefer Knuth - paraphrased as I 
can't remember the quote - The best software projects are the ones where 
the source code has been lost about half way through the development and 
started from scratch.

The point is programmers start by exploring a problem space without 
understanding it. Poor programmers just accept the first solution they put 
down. Good programmers re-implement. Great programmers have a sixth sense 
of when things are about to get ugly, and start again (and the better you 
are the less you actually have to implement before you realise things can 
be refactored for the better)...

Graham Klyne wrote:
What's my point in all this?  I supposed it might be summed up as: The 
best is the enemy of the good.
Hmmm... I take your point, and I think my attempted pithy summary missed 
its intended target.  What I was trying to convey was a sense that a great 
language has to let merely average (or worse) programmers do a halfway 
decent job.  There aren't enough great programmers to go round.

And even great programmers sometimes have to work with someone else's 
codebase (which even if written by a great programmer may have had different 
goals in mind).

(FWIW, I think Python is a language that scores pretty highly on this count.)
#g

Graham Klyne
For email:
http://www.ninebynine.org/#Contact


Re: [Haskell-cafe] Problem with overlapping class instances

2004-11-23 Thread Graham Klyne
At 16:16 23/11/04 +, Keean Schupke wrote:
The problem is that (cw c) overlaps with String. It will still overlap if
you use a data decl.
It is the CW that needs to be a datatype. See below:
Thanks.  I've massaged that into something that compiles (copy below).
I think I see why this works, but I still can't say that I find the
revised structure entirely intuitive.   I think the key feature is that
wrapped instances of ConceptExpr are distinguished from native instances
by the type constructor CW.  That is, each instance type is distinguished
by different known type constructor -- in this case, [] and CW.
I need to noodle on this a while to see if the pattern fits my application,
but I think I get the general idea.  Of all the combinations I thought about,
making the type constructor part of the class method signatures was not
one I'd tried.
Also, I think declaring CW as a 'newtype' rather than 'data'
captures the intent more directly.
#g
--
spike-overlap-ConceptExpr-datatyped.lhs
---
Some given type and class declarations:
 type AtomicConcepts a  = [(AtomicConcept,[a])]
 type AtomicRoles a = [(AtomicRole   ,[(a,a)])]

 type TInterpretation a = ([a],AtomicConcepts a,AtomicRoles a)

 class (Eq c, Show c) => ConceptExpr c where
     iConcept  :: Ord a => TInterpretation a -> c -> [a]

 type AtomicConcept = String   -- named atomic concept
 type AtomicRole= String   -- named atomic role
Declare AtomicConcept as base instance of ConceptExpr.
 instance ConceptExpr AtomicConcept where
 iConcept = undefined
To allow a common expression to support multiple description logics,
define a wrapper type and class for DLConcept and DLRole:
 newtype CW cw c = CW cw deriving (Eq,Show)
 class ConceptWrapper cw c | cw -> c where
    wrapConcept :: c -> (CW cw c) -> (CW cw c)
    getConcept :: (CW cw c) -> c
Using this, a ConceptWrapper can be defined to be an instance of
ConceptExpr:
 instance (Eq cw, Show cw, ConceptWrapper cw c, ConceptExpr c) =>
     ConceptExpr (CW cw c) where
 iConcept i   = iConcept i . getConcept
Now declare a pair containing a ConceptExpr to be an instance of
ConceptWrapper:
 type Wrap d c = (c,d)

 instance ConceptWrapper (Wrap d c) c where
 wrapConcept c (CW (_,d)) = CW (c,d)
 getConcept(CW (c,_)) = c


Graham Klyne
For email:
http://www.ninebynine.org/#Contact


[Haskell-cafe] Re: [Haskell] Top Level TWI's again

2004-11-23 Thread Benjamin Franksen
[for the third time moving this discussion to cafe]

On Tuesday 23 November 2004 20:20, Aaron Denney wrote:
 [...about std file handles...]
 They're wrappers around the integers 0, 1, and 2.  The handles could
 have been implemented to be the same, at each invocation.  (I expect
 they are in most implementations).  If we had to make them ourselves,
 they could be done as:

 stdin = makeHandle 0
 stdout = makeHandle 1
 stderr = makeHandle 2

 in absolutely pure Haskell, only the things that manipulate them need
 be in the IO monad.

If they were simple wrappers around the integers, you'd be right and I 
couldn't rightfully object to them being top-level values.

I don't like it but I have to admit that (although hypothetical) this 
invalidates the argument I gave against them being top-level things ;-(

My only rescue is to shift the blame on the OS. Indeed it is quite debatable 
whether the raw file descriptor API is a good one. Does it make sense that 
you can, e.g. swap stdin and stdout? It doesn't seem right to me that file 
descriptors are reused at all.

I think file handles should be completely abstract. Also I would rather have 
separate types for Input, Output, and RandomAccess Streams, although they 
might of course share some methods via type classes. I am currently taking a 
look at Simon Marlow's new_io library that seems to do just that (and a lot 
more).

  Keeping them outside the IO monad, and only accessing them inside --
  i.e. the current situation -- would be fine.
 
  I beg to differ. Note, I do not claim they are unsafe.

 If it's not unsafe, and it makes for simpler (hence easier to
 understand, create, debug, and modify) in what sense is it not fine?

I don't buy the easier to understand, create, debug, and modify part if it 
comes to global variables. Not everything that makes things simpler at first, 
is also good for long-term maintenance, and global variables have accumulated 
quite a bad reputation in this regard.

  They're not mutable in any sense.
 
  Well, a variable in C is not mutable in exactly the same sense: It always
  refers (=points) to the same piece of memory, whatever value was
  written to it. Where does that lead us?

 A slightly different sense, but I won't quibble much.

Assume a completely type-safe version of C, if that makes it clearer.

 It would lead us to being able to have TWIs, only readable or writeable
 in the IO Monad.  Many people don't think that would be such a bad
 thing.  But because of the semantics we expect from IORefs, we can't
 get them without destroying other properties we want.

 a = unsafePerformIO $ newIORef Nothing

 Respecting referential integrity would give us the wrong semantics.
 Adding labels would force the compiler to keep two differently labeled
 things separate,

Hmm. That sounds as if the global variables problem is equivalent to the 
problem of unique name supplies: With global variables (let's say top-level 
MVars) we can easily implement a unique name supply. And with a unique name 
supply we could (in principle, at least) define newMVar and newIORef as pure 
functions taking some unique label as an argument. (QED)

I have just taken a closer look at 'linear implicit parameters' because they 
are supposed to be good for unique name supplies. It seems they are even 
worse than the 'normal' implicit parameters. Considering the above argument 
this doesn't surprise me much. That means we are back to using the IO Monad 
to create unique labels, so all this doesn't gain us anything, at least not 
on the practical level.

 In contrast with IO Handles, there the OS does all the work.  If
 makeHandle were exposed to us, It really wouldn't matter whether

 handle1 and stdout

 handle1 = makeHandle 1
 stdout = makeHandle 1

 were beta-reduced or not.  Either way, the OS naturally handles how we
 refer to stdout.

I see the difference and I agree. However, you can already do this using the 
Posix package:

makeHandle :: Int -> System.Posix.Fd
makeHandle = fromIntegral

but note that

handleToFd :: Handle -> IO Fd
fdToHandle :: Fd -> IO Handle

are both in the IO Monad.
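
For what it's worth, something close to makeHandle can be written with the
unix package, though (as noted) only in IO; a minimal sketch:

import System.Posix.Types (Fd)
import System.Posix.IO (fdToHandle)
import System.IO (Handle, hPutStrLn)

-- Standard handles are (conceptually) wrappers around file descriptors
-- 0, 1 and 2; fdToHandle performs the wrapping, in IO.
makeHandle :: Fd -> IO Handle
makeHandle = fdToHandle

main :: IO ()
main = do
  out <- makeHandle 1            -- file descriptor 1 is stdout
  hPutStrLn out "hello via fd 1"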

BTW, the man page for stdin etc. says: Note that mixing use of FILEs and raw 
file descriptors can produce unexpected results and should generally be 
avoided.

  (One caveat here -- buffering implemented by the compiler & runtime
 would make a difference.)

I think this is done neither by compiler nor RTS but by a low-level library. 
Anyway, it seems to make a difference, as witnessed by the above signatures.

Ben