Re: Typesafe MRef with a regular monad

2003-06-13 Thread Carl R. Witty
Keith Wansbrough [EMAIL PROTECTED] writes:

  In article [EMAIL PROTECTED],
   [EMAIL PROTECTED] (Carl R. Witty) wrote:
  
   Here's a hand-waving argument that you need either Typeable (or
   something else that has a run-time concrete representation of types)
   or ST/STRef (or something else, probably monadic, that can track
   unique objects) to do this.
  
  George Russell already showed this, didn't he? You can implement 
  Typeable given type-safe MRefs, and you can implement type-safe MRefs 
  given Typeable.
 
 But George Russell's implementation relied on looking up something in
 one map with a key obtained from another map.  I thought type-safe
 MRefs should disallow this.

If you use Simon PJ's type signatures, you can't really disallow using
a key from one map with another map.

Carl Witty
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Typesafe MRef with a regular monad

2003-06-12 Thread Carl R. Witty
Simon Peyton-Jones [EMAIL PROTECTED] writes:

 | Conjecture: It's impossible to implement RefMonad directly in Haskell
 | without making use of built-in ST or IO functionality and without
 | unsafe or potentially diverging code (such as unsafeCoerce).
 
 A more concrete way to formulate a problem that I believe to be
 equivalent is this.  Implement the following interface
 
module TypedFM where
   data FM k      -- Abstract; finite map indexed by keys of type k
   data Key k a   -- Abstract; a key of type k, indexing a value of type a

   empty  :: FM k
   insert :: Ord k => FM k -> k -> a -> (FM k, Key k a)
   lookup :: Ord k => FM k -> Key k a -> Maybe a
 
 The point is that the keys are typed, like Refs are.  But the finite map
 FM is only parameterised on k, not a, so it can contain (key,value)
 pairs of many different types.
 
 I don't think this can be implemented in Haskell, even with
 existentials.  But the interface above is completely well typed, and can
 be used to implement lots of things.  What I *don't* like about it is
 that it embodies the finite-map implementation, and there are too many
 different kinds of finite maps.
 
 There is, I guess, some nice language extension that would enable one to
 program TypedFM, rather than provide it as primitive; but I don't know
 what it is.

Here's a hand-waving argument that you need either Typeable (or
something else that has a run-time concrete representation of types)
or ST/STRef (or something else, probably monadic, that can track
unique objects) to do this.  (Warning: there may be Haskell syntax
errors in the following; I haven't written Haskell for a while.)

Consider the following:

  let (m, _)  = insert empty key val
      (_, k1) = insert empty key 'V'
      (_, k2) = insert empty key val
  in (lookup m k1, lookup m k2)

This gives a value of type (Maybe Char, Maybe String); I think the
intended semantics are that the value should be (Nothing, Just val).
(Maybe the first lookup should throw an exception instead of returning
Nothing?  Surely it should not return (Just 'V') or
(Just ((unsafeCoerce# 'V') :: String)).)  The second lookup must
return (Just val); otherwise, you're breaking referential
transparency.

Now, imagine what the map looks like in memory.  Suppose it is just a
standard association list.  The key cannot be just the string key,
or the pair (key,1) (where 1 means it's the first thing inserted
into the map); it must somehow depend on the type of the value.
Hence, you need to use Typeable or some other run-time representation
of types.
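
For concreteness, here is a minimal sketch of such a Typeable-based
implementation, using today's Data.Map and Data.Dynamic (which postdate
this message).  Note that insert and lookup each pick up a Typeable
constraint that Simon's signatures do not have, which is exactly the
point:

  import Prelude hiding (lookup)
  import Data.Dynamic (Dynamic, toDyn, fromDynamic)
  import Data.Typeable (Typeable)
  import qualified Data.Map as Map

  newtype FM k    = FM (Map.Map k Dynamic)  -- values carry their types
  newtype Key k a = Key k                   -- a key remembers only its index

  empty :: FM k
  empty = FM Map.empty

  insert :: (Ord k, Typeable a) => FM k -> k -> a -> (FM k, Key k a)
  insert (FM m) k a = (FM (Map.insert k (toDyn a) m), Key k)

  lookup :: (Ord k, Typeable a) => FM k -> Key k a -> Maybe a
  lookup (FM m) (Key k) = Map.lookup k m >>= fromDynamic

With this sketch, the example above indeed evaluates to
(Nothing, Just val): the first lookup finds a Dynamic holding a String
and fails to extract a Char from it.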

The above argument convinces me that you need run-time types to
program TypedFM.  The problem in the above example is that you can
create a key in one map and attempt to use it in another map.  If you
could prohibit such an attempt (or even detect it), then you wouldn't
need Typeable.

But what does it mean to use a key in the same map?  In

  let (m1, k) = insert empty key val
      (m2, _) = insert m1 key2 val2
  in lookup m2 k

you must allow the lookup of k in m2, even though m1 and m2 are
different maps (one has a key key2 and the other doesn't).  Somehow,
a key must be usable with the map it was created in, and with
descendants of that map, but not with siblings of that map.  In an
impure language, this could be done with some sort of tag on the map,
but I don't see how to do so in a pure language.

Following the above chain of reasoning, we could create a version of
TypedFM based on the IO monad.  There would be a global
variable that you could use to dole out TypedFM tags; each map that
you created (in the IO monad, of course) would have a unique tag, and
keys of a map would have the same tag.  You could only use the maps in
a single-threaded fashion (i.e., within the IO monad); otherwise there
would be sibling maps and we would be back to needing Typeable.

We could try to associate keys with their maps in some other way.  For
instance, we could use the ST approach, and give keys and their maps
a dummy type parameter that would have to match.  Even then, we would
need something monad-like to single-thread the maps.

To summarize, I believe that you need either a run-time representation
of types (like Typeable) or some way to single-thread access to your
map objects (like the ST monad) to implement something like TypedFM.
We've seen an implementation using Typeable/Dynamic.  An
implementation in the ST monad might be more difficult, depending on
the intended semantics of TypedFM (what's supposed to happen if you
insert the same key with values of different types?); but for my
imagined uses of TypedFM, you could just use the ST monad with STRef's
directly.

If Typeable is necessary (and Dynamic is sufficient), or the ST monad
is necessary (and the ST monad is sufficient), then that doesn't leave
a lot of room for nice new language extensions (unless somebody can
come up with some way to single-thread objects that's even more clever
than 

Re: thread blocked indefinitely

2003-06-06 Thread Carl R. Witty
Simon Marlow [EMAIL PROTECTED] writes:

  I'm now `GHC.Conc.forkProcess`ing only from the initial thread,
  and all seems well.  Thanks for the suggestion!

  Any idea when `forkProcess` might get fixed?  Don't hurry on my
  account; I'm just curious.
 
 There's a comment in the code from Sigbjorn who tried to fix it and was
 unsuccessful, so it's probably a non-trivial fix.  I'll take a look next
 time I'm in the area, unless it's a complete show-stopper for anyone.
 
 Cheers,
   Simon

I was just reading through old e-mail and saw this.  The state of
affairs described here sounds like a nasty bug waiting to bite.

Would it be better to change forkProcess to detect whether it's being
called from the initial thread, and give some sort of error ("GHC
internal error: forkProcess must only be called from the initial
thread") if not?

Carl Witty
___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs


Re: Field labels must be globally unique?

2003-01-14 Thread Carl R. Witty
Marc Ziegert [EMAIL PROTECTED] writes:

 It would be nice to be able to overload class-functions like classes:

 instance (+), (-) -> Vector where
 (+) v1 v2 = ...
 (-) v1 v2 = ...

 instead of overloading parts of a class... (because of runtime-errors!)

This seems related to System CT
(http://www.dcc.ufmg.br/~camarao/CT/).  In System CT, everything is
(potentially) overloaded, and you don't have explicit type classes.

Carl Witty
[EMAIL PROTECTED]
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: The Haskell 98 Report

2002-12-03 Thread Carl R. Witty
Claus Reinke [EMAIL PROTECTED] writes:

 So, as a small token, I've revised my original plan and will now buy one
 of the printed versions (I shall also place higher priority on submitting
 to JFP in the future;-). Let's support forward-looking publishers!
 
 Thanks, Simon, and thanks, Conrad Guettler & CUP!

I agree totally (I wasn't going to buy a copy, but now I will).

Please let us know when the book becomes available through online
bookstores...

Thanks, Simon and Conrad!

Carl Witty
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: GetMBlock: misaligned block

2002-09-11 Thread Carl R. Witty

Simon Marlow [EMAIL PROTECTED] writes:

 The problem is that GHC is asking for memory at a particular address
 (0x5000) and the kernel is returning memory elsewhere that doesn't
 satisfy our alignment constraints (1M aligned).  We don't particularly
 care where we get memory from, but it must be properly aligned.

Can you just mmap more memory than you need (an extra megabyte) and
unmap the portions that aren't correctly aligned?  (To avoid the
overhead, you could do this only when a correctly-sized mmap gave you
unaligned memory.)

Carl Witty
___
Glasgow-haskell-bugs mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-bugs



Re: typeclass relations

2002-09-11 Thread Carl R. Witty

S.M.Kahrs [EMAIL PROTECTED] writes:

 The class checker for the above is like a little Prolog program:
 
 foo(int).
 bar(int).
 foo(char).
 bar(X) :- foo(X).

So, the type system for C++ lets you encode (some) Haskell programs
and the type system for Haskell lets you encode (some) Prolog
programs.  Now somebody needs to come up with a type system for Prolog
that lets you encode C++ programs.

Carl Witty
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: problems with FFI including h files

2002-06-10 Thread Carl R. Witty

Alastair Reid [EMAIL PROTECTED] writes:

  I thought we established that generating valid C prototypes from the
  Haskell FFI type signature wasn't possible due to the incompleteness
  of the Haskell type (lack of 'const' modifiers for one thing - is
  there anything else?).
 
 Compilers use the same calling sequence whether you tell them about
 const or not.  
 
 I believe that we can call all C functions correctly knowing only the
 calling convention (ccall, stdcall, etc) and the Haskell type.  I
 certainly hope this is true since Hugs' implementation of wrappers
 depends on it.
 
 (Hmmm, some calling conventions do funny things when passing and
 returning small structs.  I don't know much about this though...)

I think some ABIs specify a different calling convention for varargs
functions than non-varargs functions.  (I remember reading this on
some Haskell mailing list some time back, one of the previous times
this same discussion occurred.)

Carl Witty
___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Strictness!

2002-03-18 Thread Carl R. Witty

Jay Cox [EMAIL PROTECTED] writes:

 On Thu, 14 Mar 2002, Brian Huffman wrote:
 
  In Haskell you can produce the desired behavior by using pattern guards.
  Since the pattern guards always get evaluated before the result does, they
  can be used to make things more strict. Here is the foldl example:
 
  strict x = seq x True
 
  foldl' :: (a -> b -> a) -> a -> [b] -> a
  foldl' f z [] = z
  foldl' f z (x:xs)
    | strict z = foldl' f (f z x) xs
 
  You can make multiple arguments strict by using && or similar, e.g.:
 
  f x y | strict x && strict y = ...
  f x y | all strict [x,y] = ...
 
  Of course, with the second example x and y must have the same type.
  Hope this helps.
 
  - Brian Huffman
 
 Thanks! that looks like a great haskell idiom to use!
 
 Jay Cox

I also like this for trace:

f x y
  | trace (show (x, y)) True
  = ...

(Putting the trace on a separate line like that makes it easy to
comment out.)

Or, if you've got a function with a lot of cases, you can trace all of
them by adding:

f x y | trace (show (x, y)) False = undefined

above all the other cases.
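
As a self-contained illustration of that idiom (f and its arguments
here are made up for the example):

  import Debug.Trace (trace)

  f :: Int -> Int -> Int
  f x y | trace (show (x, y)) False = undefined  -- prints, then falls through
  f 0 y = y
  f x y = f (x - 1) (y + 1)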

Carl Witty
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: H98 Report: expression syntax glitch

2002-03-01 Thread Carl R. Witty

Simon Peyton-Jones [EMAIL PROTECTED] writes:

 I didn't phrase it right.   I meant that a let/lambda/if always
 extends to the next relevant (not part of a smaller expression)
 punctuation symbol; and if that phrase parses as an exp
 that's fine, otherwise it's a parse error.  So I should not really
 speak in terms of 'ambiguity'.
 
 Perhaps we can simply say that 
   let .. in exp
 is legal only if the phrase is followed by one of the punctuation
 symbols.  That's nice, because we don't need to talk of
 "not part of a smaller expression".
 
 So (let x = 10 in x `div`) would be rejected because 
   x `div`
 isn't a exp.

If you're going to modify the syntax in the report to match what
implementors actually implement, you may also want to change the
"illegal lexeme" definition for closing implicit layout.  I believe
that 
  do a == b == c
is (according to the Standard) legal syntax that means
  (do {a == b}) == c
but I'll bet that most if not all Haskell parsers would get it wrong.

Carl Witty
___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: syntax...(strings/interpolation/here docs)

2002-02-17 Thread Carl R. Witty

Claus Reinke [EMAIL PROTECTED] writes:

 Haskell definitely supports abstraction and composition, so we can 
 factor out application aspects (not just text) that need localisation, 
 and link them (dynamically?) with the main parts of our applications. 
 Some systematic approach would be useful, but apart from keeping 
 track of the issues raised in the standards committees, I don't see 
 why Haskellers should limit themselves to the standard way of 
 patching C#/Java apps with translated text fragments.

I think there is a good reason to use standard localisation methods;
it makes it cheaper/more likely to happen.  It sounds like you're
advocating localisation methods which would require the translators to
know Haskell; this would make hiring translators more expensive (for a
commercial proposition) or significantly reduce your pool of
volunteers (if you rely on volunteer translators).

Carl Witty
___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: GHC Poll: scope in GHCi

2002-01-09 Thread Carl R. Witty

Simon Marlow [EMAIL PROTECTED] writes:

 Ok, so in general a 'scope' can be constructed by combining:
 
1. the full top-level scope from zero or more *interpreted* modules
2. the exports of zero or more modules (interpreted or compiled)
3. any temporary bindings made on the command line

I'd like to suggest one more thing that it would be nice to have in
scope.  I would like every loaded module to be imported qualified
into the scope.  (In fact, I would prefer if this were the default,
although I suppose it might be useful to have a way to turn it off.)

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: ghc --make feature request

2001-10-26 Thread Carl R. Witty

Simon Marlow [EMAIL PROTECTED] writes:

 GHC actually has rather sophisticated recompilation checking which
 goes beyond just checking whether the interface changed - it keeps
 version information for each entity exported by a module and only
 recompiles if any of the entities actually used by the module have
 changed (this is described in the user's guide under the section on
 recompilation checking).

I've seen unexpected compiles using ghc --make.  I've got a system
with modules A, B, and C; A depends on B and B depends on C.  I've
seen the following sequence of events:

I change C
ghc --make A compiles C, B, and A
I change A
ghc --make A skips C and compiles B and A

I don't know of any reason why it would have compiled B in the second
case.

If this is not a known bug, I can try to reproduce it and submit a
formal bug report.

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: rank 2-polymorphism and type checking

2001-10-24 Thread Carl R. Witty

Simon Peyton-Jones [EMAIL PROTECTED] writes:

 So I'm interested to know: if GHC allowed arbitrarily-ranked types, who
 would use them?

I can't promise that I would use them, but it would certainly give me
warm fuzzy feelings to know that they were there. :-)

On the other hand, I believe that you can construct in GHC a type
which is isomorphic to any arbitrarily-ranked type (with any
combination of existential and universal quantification) by creating
new data constructors.  If this is true, then I'm reasonably content
with that.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Monomorphism, monomorphism...

2001-10-10 Thread Carl R. Witty

Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes:

 09 Oct 2001 13:55:04 -0700, Carl R. Witty [EMAIL PROTECTED] writes:
 
  The TREX paper from Mark Jones and Benedict Gaster (I hope I
  have the names right) had both extensible records and extensible
  variants (extensible variants being what you would need to implement
  downcasts),
 
 I don't think so. Here is how the requirement can be formulated:
 
 We need some type T such that it's possible to define a family of
 functions for arbitrary choices of A:
 upA   :: A -> T
 downA :: T -> Maybe A
 satisfying downA (upA a) = Just a. We want to choose the type for T
 before deciding the exact set of types for A.

It is true that extensible variants do not give you a single, closed
type T with this property.  

However, I think it is likely that any program you wanted to write
using these functions could instead be written in a system with
extensible variants.  Functions would be polymorphic over the exact
variant type, and the typechecker would stitch together all the
requirements on the variant type so that when compiling main, it
could decide what the actual type was.  (I say "likely" because I'm
not certain that the type inference described in the Gaster/Jones
paper would work the way I want it to, since I've never used a system
that implemented extensible variants.)

Carl Witty


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: HOpengl on Ghc 5.02

2001-10-09 Thread Carl R. Witty

Nicolas [EMAIL PROTECTED] writes:

 Hi there,
 Sorry for this stupid question:
 Is there a distrib of a HOpenGl package working with ghc 5.02. I tried
 the CVS but don't manage to make it work (ghc 5.03 panic).
 Can someone help me?

I got HOpenGL to work without trouble.  On September 29, I checked out
the ghc-5-02 branch from CVS, configured it with --enable-hopengl, and
built and installed it.  (I actually built a Debian package, using the
Debian packager's build scripts.)

I doubt if it matters, but I'm using HOpenGL with gtk+hs and gtkglarea
instead of Glut.

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Monomorphism, monomorphism...

2001-10-09 Thread Carl R. Witty

Marcin 'Qrczak' Kowalczyk [EMAIL PROTECTED] writes:

 Since OO languages often use subtypes to emulate constructors of
 algebraic types, they need downcasts. In Haskell it's perhaps less
 needed but it's a pity that it's impossible to translate an OO scheme
 which makes use of downcasts into Haskell in an extensible way
 (algebraic types are closed).

I agree.  The TREX paper from Mark Jones and Benedict Gaster (I hope I
have the names right) had both extensible records and extensible
variants (extensible variants being what you would need to implement
downcasts), but only the extensible records part of the paper was
implemented in Hugs.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: newtype | data

2001-10-05 Thread Carl R. Witty

Mark Carroll [EMAIL PROTECTED] writes:

 Why does newtype exist, instead of letting people always use data and
 still get maximum efficiency? After all, surely the implementation is an
 implementation detail - a compiler could see the use of data with a
 unary constructor and implement it as it does newtype, instead of making
 the programmer worry about how things are actually represented?
 
 I'm obviously missing something obvious here; I'm hoping to learn what.
 (-:

newtype is strict; data is not.

Given 

 data T1 = T1 Int
 newtype T2 = T2 Int
 data T3 = T3 !Int

you get the following results.

Data> (T1 undefined) `seq` ()
()
Data> (T2 undefined) `seq` ()
*** Exception: Prelude.undefined
Data> (T3 undefined) `seq` ()
*** Exception: Prelude.undefined

I can't think of a semantic difference between newtype and data with a
single unary strict constructor; I suppose it would be possible to
remove newtype from the language and make people declare things like
T3 instead.

Carl Witty

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Strange error in show for datatype

2001-10-03 Thread Carl R. Witty

Bjorn Lisper [EMAIL PROTECTED] writes:

 data LispList t = Atom t | LispList [LispList t] | Str [Char]
 
 instance Show t => Show (LispList t) where
   show (Atom t) = show t
   show (LispList t) = show t
   show (Str t) = show t
 
 hugsprompt  (LispList [Atom 1, Str "HEJ"]) == [1,"HEJ"]
 hugsprompt  (LispList [Str "HEJ",Atom 1]) == Cannot find show function
 
 So there is a problem when the value is of form Str string or where such a
 value is first in the list l in a value of the form LispList l. Oddly
 enough, such values may appear at other positions without causing any
 problems.

Are you sure about that?  I can't reproduce the above results in hugs
(Hugs 98, February 2000) or ghci (5.02).  I get a much simpler answer:
if the s-expression includes an Atom term, it works; otherwise you get
a type error.  In particular, for the second example above
(LispList [Str "HEJ", Atom 1])
both ghci and hugs produce the expected result, rather than failing.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: RFC: GUI Library Task Force

2001-09-24 Thread Carl R. Witty

Manuel M. T. Chakravarty [EMAIL PROTECTED] writes:

 + More sophisticated approaches (that often require
   language extensions or are still experimental) can be
   implemented on top of this basic API - eg, FranTk,
   Yahu, Fruit, iHaskell, etc.

I keep seeing references to Fruit (as a functional Haskell GUI) but
I can't find any real information about it.  Is there a web page for
it?

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: unsafePtrCompare, anybody?

2001-09-17 Thread Carl R. Witty

Leon Smith [EMAIL PROTECTED] writes:

 However, in this situation, pointer comparison is simply an arbitrary total 
 order on the set of all atoms, which is all we need to implement finite maps 
 based on search trees.  And of course, pointer comparisons are a much cheaper 
 operation than actual string comparison.

You could just add an extra Int sequence number to your Atoms, and
compare using that.
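
A sketch of what I mean (the Atom type here is hypothetical, and the
unique atomId is assumed to be assigned when the atom is interned):

  data Atom = Atom { atomId :: Int, atomName :: String }

  instance Eq Atom where
    a == b = atomId a == atomId b

  instance Ord Atom where
    compare a b = compare (atomId a) (atomId b)

This gives the same arbitrary-but-total order without any unsafe
pointer operations.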

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: GHC FFI Return Type Bug

2001-08-07 Thread Carl R. Witty

Sigbjorn Finne [EMAIL PROTECTED] writes:

 Julian Seward (Intl Vendor) [EMAIL PROTECTED] writes:
  
  Hmm, we're looking at this.  However, I don't really know what
  C is or is not supposed to do here.  Given
  
  char fooble ( ... )
  {
 return 'z';
  }
  
  on an x86, 'z' will be returned at the lowest 8 bits in %eax.
  What I don't know is, is the C compiler obliged to clear the
  upper 24 bits of %eax, or does that onus fall on the callee?
 
 Yes, the callee is required to narrow expr in 'return expr;' to
 fit that of the specified return type -- see 9.8 of Harbison and
 Steele. So, a C compiler that causes f() to return 0x7fff for
 the following,
 
 unsigned char g()
 {
 return 0x7fff;
 }
 
 unsigned int f()
 {
 return g();
 }
 
 is in the wrong. [Notice that narrowing for signed integral types
 is undefined in ISO C, but most current-day compilers implement
 such narrowing ops the same way, i.e., by masking off excess bits.]

It's certainly true that an ISO C compiler on a typical machine must
return 255 from f() (on an atypical machine, it's possible to have
unsigned char be a 32-bit type).  However, this is essentially
unrelated to the question of whether the x86 ABI allows g() to return
a value in %eax that has the upper 3 bytes non-zero.

When I compile the following file:
 abitest.c 
unsigned int g_val;

unsigned char g()
{
return g_val;
}

unsigned int f()
{
return g();
}
---

with the command
gcc -Wall -O2 -fomit-frame-pointer -S abitest.c

I get the output:
 abitest.S 
        .file   "abitest.c"
        .version        "01.01"
gcc2_compiled.:
.text
        .align 16
.globl g
        .type    g,@function
g:
        movzbl g_val,%eax
        ret
.Lfe1:
        .size    g,.Lfe1-g
        .align 16
.globl f
        .type    f,@function
f:
        call g
        andl $255,%eax
        ret
.Lfe2:
        .size    f,.Lfe2-f
        .comm   g_val,4,4
        .ident  "GCC: (GNU) 2.7.2.3"
---

You can see that the code for f is:
call g
andl $255,%eax
ret
So gcc believes that a function which returns a value of type unsigned
char is not responsible for clearing the high 3 bytes of %eax.

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Why is there a space leak here?

2001-06-11 Thread Carl R. Witty

S. Alexander Jacobson [EMAIL PROTECTED] writes:

 On 6 Jun 2001, Carl R. Witty wrote:
 
  S. Alexander Jacobson [EMAIL PROTECTED] writes:
 
   For example w/ foldl:
  
   foldl (+) 0 [1..10000]
   foldl (+) ((+) 0 1) [2..10000]
   foldl (+) ((+) ((+) 0 1) 2) [3..10000]
  
   Can't the implementation notice that each iteration leads to a
   larger closure and, if it is running out of space, go ahead and just
   evaluate (+) 0 1?
 
  It's complicated.  You can't (in general) know whether application of
  a function will increase or decrease the space used.  If you were
  running out of space, would you just search the whole unevaluated
  program graph for reductions which somehow seemed "likely" to reduce
  the space used?  Would you add such reduction nodes to some global
  list at the time they were created?
 
 I'm not clear why you can't in general notice that you are using
 more space after function application than before.  I find it hard to
 see why a program couldn't do the analysis I just did on foldl.

I wasn't worried about foldl; you assumed that (+) 0 1 got smaller if
you carried out the application.  Even for (+) on Integer, this is not
guaranteed (for large integers, if something else happens to be
holding on to the summands, evaluating the addition can increase total
space usage).

 You could accumulate statistics on functions that increase/decrease space
 used at runtime and evaluate those that do reduce space used...

Right, that's the sort of thing I meant about "likely" above.  But how
do you find such function applications in the global program graph, if
you seem to be running low on space?  (And you also need to realize
that some functions might usually have small outputs, and sometimes
have large outputs.)

  One portable way to implement a memoizing function in Haskell (if the
  domain of the function is countable) is to lazily build a data
  structure that contains the results of the function on every possible
  argument.  Then you evaluate the portions of the data structure that
  you need; the result on each argument is only evaluated once.  This
  probably would count as a "growing expression", and it's certainly
  possible that the function on some arguments would be bottom.
 
 I don't think I understood this.  Can you clarify?

Let me know if JCAB's response wasn't enough here.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Why is there a space leak here?

2001-06-06 Thread Carl R. Witty

S. Alexander Jacobson [EMAIL PROTECTED] writes:

 For example w/ foldl:
 
 foldl (+) 0 [1..10000]
 foldl (+) ((+) 0 1) [2..10000]
 foldl (+) ((+) ((+) 0 1) 2) [3..10000]
 
 Can't the implementation notice that each iteration leads to a
 larger closure and, if it is running out of space, go ahead and just
 evaluate (+) 0 1?

It's complicated.  You can't (in general) know whether application of
a function will increase or decrease the space used.  If you were
running out of space, would you just search the whole unevaluated
program graph for reductions which somehow seemed "likely" to reduce
the space used?  Would you add such reduction nodes to some global
list at the time they were created?

 I realize that there is a risk of evaluating _|_ unnecessarily, but if you
 are otherwise going to run out of memory, you might as well give it a
 shot.
 
 In practice, how often do you expect to see growing expressions that cover
 a _|_ that are not actually an error in any case?

It's certainly possible.

One portable way to implement a memoizing function in Haskell (if the
domain of the function is countable) is to lazily build a data
structure that contains the results of the function on every possible
argument.  Then you evaluate the portions of the data structure that
you need; the result on each argument is only evaluated once.  This
probably would count as a "growing expression", and it's certainly
possible that the function on some arguments would be bottom.
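
A concrete instance of this technique (my own illustration, not from
the original program under discussion) is a lazily memoized Fibonacci
function, where the table of all results is exactly such a growing
structure:

  memoFib :: Int -> Integer
  memoFib = (table !!)
    where
      table = map f [0 ..]     -- lazily holds the result for every argument
      f 0 = 0
      f 1 = 1
      f n = memoFib (n - 1) + memoFib (n - 2)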

 Hunting down memory leaks is already so obscure, that you might as well
 take a shot at solving the problem automatically...
 
 Alternatively, is there some magical way of warning about leaky
 expressions at compile time?  You don't have to ban them, but it would be
 nice if the programmer were aware of which parts of the code are likely to
 grow...

In general, this problem is uncomputable.  It might be possible to
come up with some useful approximation, but I bet that's a very
difficult research problem.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Endangered I/O operations

2001-05-23 Thread Carl R. Witty

Simon Marlow [EMAIL PROTECTED] writes:

 You obtain the ordering properties by setting the handle to NoBuffering,
 otherwise you get buffered input/output.  Wouldn't it be deviating from
 the report to do extra flushing in the buffered case?  (this is
 something of a technicality, actually we already do non-report flushing
 in several cases and our line-buffered input isn't line-buffered at
 all).

If the report does not allow the implementation to flush buffers at
any time, I would call that a bug in the report.  I would much rather
use an implementation where stdout and stderr came out in the right
order, and reading from stdin flushed stdout.  (As another example, an
implementation might want to flush all buffers before doing a fork(),
to avoid duplicated output.)

The only caveat is that if such flushing is allowed but not required,
it might encourage writing sloppy, nonportable code.

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Templates in FPL?

2001-05-22 Thread Carl R. Witty

D. Tweed [EMAIL PROTECTED] writes:

 In my experience the C++ idiom `you only pay for what you use' (==
 templates are essentially type-checked macros) and the fact most compilers
 are evolved from C compilers makes working with templates a real pain in
 practice.

I'm not sure what you mean by "type-checked" here.  Templates are not
type-checked at definition time, but are type-checked when they are
used; the same is true of ordinary macros.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Templates in FPL?

2001-05-22 Thread Carl R. Witty

Jerzy Karczmarczuk [EMAIL PROTECTED] writes:

 We know that a good part of top-down polymorphism (don't ask me what
 do I mean by that...) in C++ is emulated using templates.
 
 Always when somebody mentions templates in presence of a True Functionalist
 Sectarian, the reaction is "What!? Abomination!!".
 
 Now the question: WHY?
 
 Why so many people say the C++ templates are *wrong* (and at the same time
 so many people use it every day...)

I agree with Marcin about the bad points of templates.  (I'd like to
point out, though, that the idea that "adding new code can't change
the meaning of a program" no longer holds in Haskell extended with
overlapping type classes.)

 Is it absolutely senseless to make a functional language with templates?
 Or it is just out of fashion, and difficult to implement?

There are some very cool things about C++ templates; they are more
powerful than you might suspect.  (Note: the following summarizes and
paraphrases several papers I've read on the topic; let me know if you
want more information and I'll look up references for you.)

Basically, templates let you extend the compiler.  You get a primitive
functional programming language, with which you can do basically
arbitrary computations on types and on integers; this language can be
used to generate special-purpose code.  One way to look at this is as
a form of partial evaluation (although there's nothing which
automatically decides which computation to do at compile time and
which at run time).

In the functional programming language, you have conditionals, the
usual arithmetic operations, pairing, recursion, etc.; basically, it's
a real programming language (although a very annoying one -- the
syntax is atrocious).

Here are a few examples of the kind of thing you can do with C++
templates:

* Compute log base 2 at compile time.  You can write a template such
that if n is a compile-time constant, Log2<n>::val is its log base 2.

* Generate FFT routines.  You can write a template such that if n is a
compile-time constant which is a power of 2, then FFT<n>::do_fft() is
a routine which computes an FFT on an array of size n, as a single
chunk of straight-line code (no loops).  (Not necessarily useful -- it
generates a lot of code for reasonable-sized arrays, and cache effects
probably mean that something with loops is more efficient -- but cool
nonetheless.)

* Implement lambda (over a subset of full C++).  You can make
lambda(X, a*(X+b)) do the right thing (basically by arranging for
the expression a*(X+b) to have a type like 
Times<Num, Plus<PlaceholderX, Num> >
and then writing a function using recursion over this type).

These techniques are useful when you're going for speed and
generality; I would certainly imagine that such things could be useful
in a functional programming language as well.  In fact, over the past
few weeks, people on the Haskell mailing lists have been trying to do
vaguely similar kinds of compile-time computation at the type level,
using type classes and functional dependencies.
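
For instance, here is a sketch of that type-level style in modern GHC
spelling (the extension names postdate this message): Peano addition
carried out entirely by instance selection, the moral equivalent of the
Log2<n> template above:

  {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
               FlexibleInstances, UndecidableInstances #-}

  data Zero
  data Succ n

  class Add a b c | a b -> c where
    add :: a -> b -> c
    add _ _ = undefined

  instance Add Zero b b
  instance Add a b c => Add (Succ a) b (Succ c)

  -- add (undefined :: Succ Zero) (undefined :: Succ (Succ Zero))
  --   has type Succ (Succ (Succ Zero)), computed by the type checker.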

On the other hand, C++ templates have huge flaws for this kind of
thing.  As I mentioned, the syntax is atrocious; also, there are lots
of things you would like to do which are apparently just out of reach
(the template language is not quite powerful enough).

I'd like to see somebody make a serious effort to add this kind of
compile-time processing to Haskell.  It might not look anything like
templates; something like OpenC++ and OpenJava might be better.
(These are C++ and Java compilers which are extensible; you can write
compiler extension modules using a standard API, and the compiler
dynamically loads your extensions.)  

There is a big reason why C++ templates (or any other form of
compile-time arbitrary computation on types) would be hard to add to
Haskell: type inference and polymorphism.  For C++ templates, you need
to know the argument types before you can do the template processing,
and you cannot know the type of the result until the processing is
done.  Probably this facility would only be useful for monomorphic
parts of your program, which might mean it's not appropriate for
Haskell at all.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Happy and Macros (was Re: ANNOUNCE: Happy 1.10 released)

2001-05-14 Thread Carl R. Witty

Manuel M. T. Chakravarty [EMAIL PROTECTED] writes:

 I didn't say that this works for any kind of parser
 combinator, I merely said that it works Doitse's and mine.
 Both implement SLL(1) parsers for which - as I am sure, you
 know - there exists a decision procedure for testing
 ambiguity.  More precisely, whenever the library can build
 the parse table, the grammar must be non-ambigious.  As the
 parse table construction is lazy, this covers only the
 productions exercised in that particular run, which is why I
 said that you need a file involving all grammar constructs
 of the language.  Nothing magic here.

Wow.  Clearly I haven't spent enough time looking at your parser
systems.  I apologize for my incorrect assumptions and statements.

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




Re: Happy and Macros (was Re: ANNOUNCE: Happy 1.10 released)

2001-05-11 Thread Carl R. Witty

Manuel M. T. Chakravarty [EMAIL PROTECTED] writes:

 I don't think, the point is the test for non-ambiguity.  At
 least, Doitse's and my self-optimising parser combinator
 library will detect that a grammar is ambigious when you
 parse a sentence involving the ambiguous productions.  So,
 you can check that by parsing a file involving all grammar
 constructs of the language.

Sorry, doesn't work.  Where do you get this "file involving all
grammar constructs of the language"?

If such an approach worked, you could use it to determine whether an
arbitrary context-free grammar was ambiguous; but this problem is
undecidable.

Carl Witty

___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




Re: Happy and Macros (was Re: ANNOUNCE: Happy 1.10 released)

2001-05-10 Thread Carl R. Witty

S. Alexander Jacobson [EMAIL PROTECTED] writes:

 I am not a parsing expert, but given the recent discussion on macros, I
 have to ask: why use happy rather than monadic parsing?  Monadic parsing
 allows you to avoid a whole additional language/compilation step and work
 in Hugs (where you don't have a makefile).  What does Happy buy you here?

I've used Happy instead of parser combinators because I wanted the
additional global error-checking.  I believe that the standard
implementations of parser combinators will allow you to specify an
ambiguous grammar, and return one of the multiple possible parses,
without warning.

Carl Witty

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: type class

2000-10-02 Thread Carl R. Witty

Simon Peyton-Jones [EMAIL PROTECTED] writes:

 | How about extending TC with a branch for abstraction:
 | 
 | TC ::= ...
 |      | /\a. TC  -- abstraction
 | 
 | This is too powerful and will get out of control -- we surely
 | don't want to give TC the full power of lambda-calculus.  So let's
 | impose a restriction: in /\a.TC, a must occur free in TC *exactly
 | once*.  This way, abstraction can only be used to specify with respect
 | to which argument a partial application is.  (or I think so -- I
 | haven't tried to prove it.)
 
 That's an interesting idea that I've not seen suggested before.
 
 Someone should study it!

I wonder if the linear logic folks have studied this form of lambda
calculus?

Carl Witty




Re: How to force evaluation entirely?

2000-09-26 Thread Carl R. Witty

John Hughes [EMAIL PROTECTED] writes:

 As far as the power of the optimizer is concerned, my guess is programmers
 very rarely write x==x (unless they MEAN to force x!), so the loss of
 optimization doesn't matter. Of course, in principle, an optimizer *could*
 replace x==x by x`seq`True (if x is known to be of base type), and the x`seq`
 might well be removed by later transformations (if x can be shown to be
 defined, something compilers do analyses to discover). Who knows, maybe this
 happens in the innards of ghc...

Or the compiler could internally create its own HyperStrict class and
replace x==x by x`hyperSeq`True, if all the Eq instances involved in
the type of x are known to be reflexive (which is the case if they
were all automatically derived). :-)
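
Such a class is easy to sketch by hand (an illustration only, not an
actual GHC internal):

  class HyperStrict a where
    hyperSeq :: a -> b -> b

  instance HyperStrict Int where
    hyperSeq = seq

  instance HyperStrict a => HyperStrict [a] where
    hyperSeq []       y = y
    hyperSeq (x : xs) y = x `hyperSeq` (xs `hyperSeq` y)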

Carl Witty




Re: Extensible data types?

2000-09-25 Thread Carl R. Witty

Jose Romildo Malaquias [EMAIL PROTECTED] writes:

 Hello.
 
 Is there any Haskell implementation that supports
 extensible data types, in which new value constructors
 can be added to a previously declared data type,
 like
 
   data Fn = Sum | Pro | Pow
   ...
   extend data Fn = Sin | Cos | Tan
 
 where first the Fn datatype had only three values,
 (Sum, Pro and Pow) but later it was extended with
 three new values (Sin, Cos and Tan)?


I don't know of any implementation that directly supports this.
Somebody else pointed you at TREX.  The theoretical underpinnings of
TREX are discussed in the following paper:

A Polymorphic Type System for Extensible Records and Variants,
Benedict R. Gaster and Mark P. Jones, Technical report NOTTCS-TR-96-3,
November 1996, Department of Computer Science, University of
Nottingham, University Park, Nottingham NG7 2RD, England.

(you can download a copy from http://www.cse.ogi.edu/~mpj/pubs.html).
As mentioned in the title of the paper, the type system described
supports both extensible records and extensible variants.  The TREX
extension only implemented the "extensible records" portion.  I've
often wished that the "extensible variants" were also implemented;
they seem more useful than extensible records to me.

Carl Witty




Re: monadic source of randomness

2000-08-09 Thread Carl R. Witty

Norman Ramsey [EMAIL PROTECTED] writes:

 Does anybody know of work using monads to encapsulate a source of 
 random numbers?  A quick web search suggested Haskell 98 did not take
 this path.  I'd be curious for any insights why, or any suggestions
 about a `randomness monad'.

My guess as to why Haskell 98 does not provide a stand-alone
"randomness monad" is that monads are annoying (impossible in general)
to combine.  If you want your own randomness monad, it's pretty easy
to put a random state in a state monad; that approach means that you
can easily add other chunks of state as wanted.
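
A minimal sketch of that approach, using today's System.Random and
Control.Monad.State (the names Rand and nextRandom are my own):

  import System.Random (StdGen, Random, random, mkStdGen)
  import Control.Monad.State (State, state, evalState)

  type Rand a = State StdGen a

  nextRandom :: Random a => Rand a
  nextRandom = state random

  pair :: Rand (Int, Bool)
  pair = do x <- nextRandom
            b <- nextRandom
            return (x, b)

  example :: (Int, Bool)
  example = evalState pair (mkStdGen 42)  -- one generator threads both draws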

Carl Witty




Re: Classes

2000-08-03 Thread Carl R. Witty

[EMAIL PROTECTED] (Carl R. Witty) writes:

 "Claus Reinke" [EMAIL PROTECTED] writes:
 
  Fergus (and others): how about compiling a summary of the 
  relationships (a kind of dictionary of terminologies) ? In 
  particular, what is the state of the art in logic programming 
  wrt determinism and termination analysis? Can you 
  recommend any recent surveys, or can anyone offhand
  propose "nicer" constraints to ensure termination and 
  determinism (of executions, not results), compared to 
  those currently used for type classes?
 
 My former thesis adviser, David McAllester, did some work on efficient
 (and hence terminating) execution of logic programs that may be
 relevant here.  You might be interested in
   http://www.research.att.com/~dmac/kr92.ps
 (and several of his other papers).
 
 There are also simpler methods for determining termination.  One
 simple constraint which proves termination is that every proper
 subterm of an "antecedent" must also be a proper subterm of the
 "conclusion".  An instance declaration like
 
 (P (F [a]) (G [(a, b)]) (H a [b]), Q b a) => (R (H (F [a]) [(G [(a, b)])]) (H a [b]) c)
 
 would thus be acceptable.
 
 Of course, this does not prove determinism.  I think that is probably
 the harder of the two problems.

I have been thinking about how to check determinism of a set of
inference rules (check whether every provable statement has at most
one proof).  I have come up with a proof that checking determinism for
inference rules of the above form (where every proper subterm of an
antecedent is a proper subterm of the conclusion) is undecidable.  Let
me know if you want the proof; it's somewhat long.

You can imagine checking determinism either where the type class
instance is defined, or where the type class is used.  According to
the above result, if you insist on checking determinism at instance
definition time, you'll have to restrict the class of allowed
instances more than my suggestion does.  However, if you delay
flagging ambiguity errors until the ambiguous case is actually used,
that can still be efficiently checked.

Carl Witty




Re: Classes

2000-07-31 Thread Carl R. Witty

"Claus Reinke" [EMAIL PROTECTED] writes:

 Fergus (and others): how about compiling a summary of the 
 relationships (a kind of dictionary of terminologies) ? In 
 particular, what is the state of the art in logic programming 
 wrt determinism and termination analysis? Can you 
 recommend any recent surveys, or can anyone offhand
 propose "nicer" constraints to ensure termination and 
 determinism (of executions, not results), compared to 
 those currently used for type classes?

My former thesis adviser, David McAllester, did some work on efficient
(and hence terminating) execution of logic programs that may be
relevant here.  You might be interested in
http://www.research.att.com/~dmac/kr92.ps
(and several of his other papers).

There are also simpler methods for determining termination.  One
simple constraint which proves termination is that every proper
subterm of an "antecedent" must also be a proper subterm of the
"conclusion".  An instance declaration like

(P (F [a]) (G [(a, b)]) (H a [b]), Q b a) => (R (H (F [a]) [(G [(a, b)])]) (H a [b]) c)

would thus be acceptable.

Of course, this does not prove determinism.  I think that is probably
the harder of the two problems.

Carl Witty




Re: Precision problem

2000-07-18 Thread Carl R. Witty

Fergus Henderson [EMAIL PROTECTED] writes:

 Yes, but for any given Haskell program execution, the sum of any two
 floating-point values should be the same every time you compute it.
 In general it need not be the same as the sum of the equivalent real
 numbers, because floating point numbers are subject to rounding,
 overflow, etc., and of course it might vary from platform to platform,
 or from compiler to compiler, or perhaps even from run to run;
 but nevertheless, Haskell or any other language which aims to be
 referentially transparent, for any given program execution the sum
 should be the same each time in that program execution.

I haven't verified it, but I expect that on x86, the following code, if
compiled by gcc, can print "Oops!  divided by zero": the comparison
a-b == 0 may be evaluated in the x87's 80-bit registers, where the
difference is nonzero, while the value passed to inverse() is rounded
to a 64-bit double, where it becomes zero.  I agree with Fergus that
this behavior is undesirable and should be avoided for Haskell, unless
a flag like "-funsafe-fast-math" is provided.

double inverse(double val) {
  if (val == 0) {
    printf("Oops!  divided by zero\n");
    abort();
  } else {
    return 1/val;
  }
}
...
  double a, b;
...
  if (a-b == 0) {
    inv = 1;
  } else {
    inv = inverse(a-b);
  }
...

Carl Witty




Re: mode in functions

2000-06-01 Thread Carl R. Witty

Simon Raahauge DeSantis [EMAIL PROTECTED] writes:

 It seems to me that mode flags only really make sense when we're combining
 modes. To continue the tar example it might be a bit much to have
 extractVerbosePreserve, extractPreserve etc etc. This is also done in C by
 |'ing 'flags' together for things like open(). So mode flags make sense in
 UNIX and C. In Haskell we combine functions and use higher order functions,
 à la sortBy. For tar probably the best would be generating a list of the
 files in the archive, including information like modification time and
 permissions and then mapping onto that the composition of funtions that have
 type FileInfo - IO FileInfo. So if you wanted to extract and preserve you'd
 do 'mapTar (extract . preserve)', adding in verbose and so on if you wanted.

Another option which preserves strong typing would be lists of
strongly-typed mode settings.  HOpenGL uses things like this in places
where the original OpenGL used bitflags.  Something like:

  tarExtract [Tar.PreservePermissions, Tar.Verbose] "foo.tar"
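
where the flag type might be declared along these lines (the Tar module
and these particular constructor names are hypothetical):

  data TarMode = PreservePermissions | Verbose | KeepOldFiles
    deriving (Eq, Show)

  tarExtract :: [TarMode] -> FilePath -> IO ()
  tarExtract modes path =
      putStrLn ("extracting " ++ path ++ " with " ++ show modes)  -- stub body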

Carl Witty
[EMAIL PROTECTED]




bug in ghc, hugs, or green-card

2000-05-31 Thread Carl R. Witty

As you can tell from the title, I'm not sure exactly where the
responsibility for this bug lies.

The symptom: green-card (running under hugs, with runhugs) fails when
running in a project directory built with "ghc -split-objs".

The cause: Green Card does import chasing; that is, when processing a
file containing (say) "import GL_DIS", Green Card will try to find and
process a corresponding Green Card file.  It searches first for a
file named "GL_DIS"; if that fails, it will search for "GL_DIS.gc".

If this is in a directory which is being built using ghc with the
"-split-objs" option, then there will be a directory named "GL_DIS".

Consider the following program:
 module Main where
 
 main = catch (do v <- readFile "/etc"; putStrLn ("Success: " ++ v))
  (\_ -> putStrLn "Failed!")

When run under Hugs, this prints "Success: ".  When compiled with ghc,
this prints "Failed!".

This difference means that if Green Card is run under Hugs, and it
tries to load the Green Card file corresponding to "import GL_DIS",
and there is a directory named "GL_DIS", green-card will believe that
this is an empty Green Card file, process it (with no effect, because
it's empty; and with no warning or error, because it's present), and
subsequently fail.

So, the question is, "Should readFile on a directory throw an IO
error?"  If so, then there is a bug in Hugs; if not, there is a bug
(or at least a severe misfeature) in Green Card (and a bug in the ghc
libraries, which do throw an IO error).

I couldn't find a definitive answer to this question in the Haskell 98
report or library report (although I didn't spend too much time
looking).  My personal opinion is that it would be better if an IO
error were thrown, so that Hugs should be changed; but that Green Card
should not depend on this, so that the line:

  default_suffixes = ["","gc"]

in GreenCard.lhs should be changed to
  default_suffixes = ["gc"]
or
  default_suffixes = ["gc",""]

In the meantime, a sufficient workaround is to add "--suffix=gc" to
the green-card command line.

Carl Witty




typo in CVS users_guide/4-07-notes.sgml

2000-05-30 Thread Carl R. Witty

The file 4-07-notes.sgml in the latest CVS mentions 
http://www.cse.ogi.ed/~jlewis/implicit.ps.gz

That should be .edu, not .ed .

Carl Witty
[EMAIL PROTECTED]




typo in CVS hslibs/lang/doc/IArray.sgml

2000-05-30 Thread Carl R. Witty

The file IArray.sgml in the latest CVS says "additiona" instead of
"addition".

Carl Witty
[EMAIL PROTECTED]




Re: Fw: more detailed explanation about forall in Haskell

2000-05-16 Thread Carl R. Witty

"Jan Brosius" [EMAIL PROTECTED] writes:

  SORRY, this is quite TRUE; in fact [forall x. alpha(x)] = alpha(x)
 
  the above true equivalence seems to be easily considered as wrong. Why?
  Because "alpha(x) is TRUE" can be read as "alpha(x) is TRUE for ANY x".
 
  (Is there something wrong with the education of a computer scientist?)

Jan, could you tell us whether you think the following statements are
true or false?  

Let prime(x) mean "x is a prime number"; read "=" below as logical
equivalence and "=>" as implication.

1. [forall x. prime(x)] = prime(x)
2. forall x.([forall x. prime(x)] = prime(x))
3. forall y.([forall x. prime(x)] = prime(y))
4. [forall x. prime(x)] = prime(y)
5. [forall x. prime(x)] = prime(2)
6. [forall x. prime(x)] => prime(2)
7. prime(2) => [forall x. prime(x)]
8. prime(2)
9. [forall x. prime(x)] = prime(4)
10. [forall x. prime(x)] => prime(4)
11. prime(4) => [forall x. prime(x)]
12. prime(4)
13. forall x. prime(x)
14. prime(x)
15. Statement 1 above means the same thing as statement 2 above.
16. Statement 2 above means the same thing as statement 3 above.
17. Statement 3 above means the same thing as statement 4 above.
18. Statement 5 above is a substitution instance of statement 3;
thus, if statement 3 were true, statement 5 would be true.
19. Statement 13 above means the same thing as statement 14 above.

If we follow the convention that free variables are to be considered
implicitly universally quantified, my vote is that statements 6, 8, 9,
10, 11, 15, 16, 17, 18, and 19 are true; the rest are false.

Carl Witty




Re: speed and size of compiled Haskell code

2000-03-17 Thread Carl R. Witty

Fergus Henderson [EMAIL PROTECTED] writes:

 On 16-Mar-2000, Jan Brosius [EMAIL PROTECTED] wrote:
  I wonder if someone could tell me more about the speed and size of compiled
  Haskell code.
 ...
  What about Haskell 98 versus (I anticipate) Haskell 2
 
 There should be no significant differences as far as performance goes
 between Haskell 98 and whatever the next revision of Haskell is called.

Well, it's possible that the next revision of Haskell could be changed
to allow for better performance; for instance, it could include GHC's
unboxed type extension, or a standard packed strings library.

Carl Witty
[EMAIL PROTECTED]



Re: overlapping instances

2000-02-07 Thread Carl R. Witty

"Jeffrey R. Lewis" [EMAIL PROTECTED] writes:

 Marcin 'Qrczak' Kowalczyk wrote:
  Parts of context reduction must be deferred, contexts must be left
  more complex, which as I understand leads to worse code - only to
  make overlapping instances behave consistently, even where they are
  not actually used.
...
 When you say:  `even where they are not actually used', I'm not sure what you
 mean.  The deferred reduction only happens on classes with overlap.  Classes
 without overlap will be `eagerly' reduced.

How can that work?  Given separate compilation, you don't know whether
a class has overlapping instances or not, do you?

Carl Witty
[EMAIL PROTECTED]



Re: A Haskell-Shell

1999-08-23 Thread Carl R. Witty

Heribert Schuetz [EMAIL PROTECTED] writes:

 Hi,
 
 The appended patch to Hugs98 (to be applied in the src subdirectory)
 might be of some help for those who want to do shell scripting in
 Haskell. It modifies IO.openFile as follows:
 
 - If the name of a file opened in ReadMode ends in "|", then the part
   before the "|" is considered a program and its standard output is
   read.
 
 - If the name of a file opened in WriteMode begins with "|", then the
   part after the "|" is considered a program and it is written to its
   standard input.
 
 Several Unix programs have such a behaviour.

I'd recommend against this; it's a potential source of nasty security
holes.  (Suppose somebody uses this version of Hugs to write a system
utility...something like "grep", say.  And then does "cd /tmp; mygrep
whatever *".  And suppose somebody else has created a file named
"/tmp/rm -rf ..|".)

A new function, openFilePipe say, with security warnings in the
documentation, would be better.

Carl Witty





Re: Stylistic question about Haskell optional arguments

1999-08-18 Thread Carl R. Witty

Paul Hudak [EMAIL PROTECTED] writes:

  Carl> I'm afraid this doesn't work.  There are two problems:
  Carl> 1) You need a constructor above:
  Carl>  h1 (stringToHtml "This is a Header" (H1Args { align = Right}))
  Carl> or
  Carl>  H1 { align = Right, html = stringToHtml "This is a Header" }
 
 and Marko replied:
  h1 (stringToHtml "This is a Header") { align = AlignRight}
  works if h1 and stringToHtml are as follows
  stringToHtml s = Text { text = s }
  h1 h = H1 { inside = h }
 
 Actually, what I intended was this:
 
 h1 ((stringToHtml "This is a Header") {align = Right})
 
 but I left out the inner parens in my first message (sorry).  I think
 that this is the simplest, and preserves Andy's original types.

Sorry about my message; I wasn't thinking straight.  (I forgot all
about the record update syntax, and only remembered the record
construction syntax.)

Carl Witty





Re: Stylistic question about Haskell optional arguments

1999-08-17 Thread Carl R. Witty

Paul Hudak [EMAIL PROTECTED] writes:

 One alternative is to use labelled fields.  In your example, if Html
 were an algebraic datatype such as:
 
  data Html = Type1 { align :: Align, ... }
| Type2 { align :: Align, ... }
| ...
 
  data Align = Left | Right | Center
 
 then instead of:
 
  h1 [align "right"] (stringToHtml "This is a Header")
 
 you could write:
 
  h1 (stringToHtml "This is a Header" { align = Right})
 
 or whatever, and you don't have the problem of dangling []'s, 
 since stringToHtml would presumably provide a default alignment,
 and it is legal to have the same label in different constructors.

I'm afraid this doesn't work.  There are two problems:

1) You need a constructor above:

 h1 (stringToHtml "This is a Header" (H1Args { align = Right}))

or

 H1 { align = Right, html = stringToHtml "This is a Header" }

2) Missing fields in a labeled field constructor are initialized to
_|_ (bottom).  Thus, there's no safe way (in standard Haskell) to
differentiate between

 H1 { align = Right, html = stringToHtml "This is a Header" }

and

 H1 { html = stringToHtml "This is a Header" }

Attempts to extract the "align" field and do something with it in the
latter case will result in a run-time error.

Future versions of Haskell could address this using exception
handling, or by providing a default value for missing labels in
labeled field constructions.
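
In the meantime, one workable convention is to make optional fields
explicitly optional (a sketch; these names are mine, not Andy's
library's):

  data Align = AlignLeft | AlignRight | AlignCenter
    deriving (Eq, Show)

  data Html = H1 { align :: Maybe Align, body :: String }
    deriving Show

  -- The smart constructor supplies a detectable default, not _|_.
  h1 :: String -> Html
  h1 s = H1 { align = Nothing, body = s }

  -- "Defaulted" and "explicitly set" can now be told apart.
  effectiveAlign :: Html -> Align
  effectiveAlign = maybe AlignLeft id . align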

Carl Witty





Re: diagonalization

1999-07-20 Thread Carl R. Witty

Hans Aberg [EMAIL PROTECTED] writes:

 This is in fact one of the more easy questions: One defines a list l on a
 set A to be a map l: [0, x) -> A on a semi-open interval [0, x), where x is
 an ordinal, and 0 is the first (smallest) ordinal. Then the set of all
 lists have type list ([A] in Haskell), just as before: one simply extends
 the possibility of infinities of lists.

For these new lists to be computationally useful, you need new
operations (more than the "check for empty list", "take the head", and
"take the tail" provided by Haskell).  What new operations would you
suggest providing?  How would they be implemented?  (Would they
involve increasing the size of an evaluated list object, to add more
pointers?)

Carl Witty
[EMAIL PROTECTED]





Re: diagonalization

1999-07-19 Thread Carl R. Witty

Hans Aberg [EMAIL PROTECTED] writes:

 I think that the original problem is due to the fact that Haskell does not
 know how to handle ordinals properly:
 
 Let S be the set of countable finite ordinals; if w = \omega is the first
 countably infinite ordinal and N the set of natural numbers, then S can be
 identified with N[w], the set of polynomials with coefficients in the set
 of natural numbers. The set S is well ordered. If A is a set, then an at
 most countably infinite list l with values in A can be viewed as a map l: I
 -> A, where I is an interval of S containing 0 (the smallest element of S).
 
 Now look at
 l = [(x, y) | x <- [0..], y <- [0..] ]
 which in Hugs produces the output
 [(0,0),(0,1),(0,2),(0,3),(0,4) 
 without ever terminating or exhausting the first variable. But if lists are
 defined as merely maps N - A, then this object is not even well defined as
 a list: we have failed to set up a proper function defining the list.
 
 However, using the definition using ordinals above, we can define l as the map
a + b w |-> (a, b)
 which is clearly well defined. One can similarly concatenate infinite lists.
 
 So now the problem is no longer how to properly create the infinite list,
 but how to properly print it out once it has been created. That consists of
 finding a suitable projection pi: N -> I subset S = N[w], and then print
 l_pi(i), i = 0,1,2, ...

Well, the problem is not just printing it out, but doing anything with
the list.  In your example 
   l = [(x, y) | x <- [0..], y <- [0..] ]
or
   a + b w |-> (a, b)
do you want l to have the type [(Integer,Integer)] ?  If not, what
type should it have?

If l does have type [(Integer,Integer)], then it is essentially
equivalent to
   l' = [(0, y) | y <- [0..] ]
The only operations available in Haskell on lists are check for empty
list, take the head, and take the tail; no number of applications of
these operations can distinguish l and l'.
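
Concretely (a sketch):

  l, l' :: [(Integer, Integer)]
  l  = [ (x, y) | x <- [0 ..], y <- [0 ..] ]
  l' = [ (0, y) | y <- [0 ..] ]

  -- Every finite observation agrees, for any depth you care to test.
  sameFirstThousand :: Bool
  sameFirstThousand = take 1000 l == take 1000 l'   -- True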

 But point is that the different possible choices of projection pi does not
 affect the mathematical object l.

Is there any point in having different mathematical objects l and l'
when no Haskell program could distinguish them?

Carl Witty
[EMAIL PROTECTED]





tiny bug in docs/libraries/Weak.sgml

1999-05-05 Thread Carl R. Witty

Whoever did the global search-and-replace of "finalise" by "finalize"
missed the word form "finalisation".

Carl Witty
[EMAIL PROTECTED]



Re: Haskell 2 -- Dependent types?

1999-02-28 Thread Carl R. Witty

Lennart Augustsson [EMAIL PROTECTED] writes:

(I believe that there are type
  theories with dependent types, such as the one in Thompson's _Type
  Theory and Functional Programming_, where each term has at most one
  type; so it can't just be dependent types that disallow principal
  types.)
 The more I think about this, the less I believe it. :-)
 I don't think each term can have at most one type (unless all
 terms have a type annotation, in which case it is trivial).

Sorry; I wasn't thinking clearly.  You're quite right; Thompson's
theory does achieve its at-most-one type property by putting lots of
type information in the terms (although not quite as bad as an
annotation on each subterm).

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-28 Thread Carl R. Witty

Fergus Henderson [EMAIL PROTECTED] writes:

  Could you give an example of language syntax that you feel would be
  better than putting these properties in the type system, while still
  allowing similar compile-time checking?
 
 I already gave NU-Prolog and Eiffel as examples.
 Those languages don't provide the same kind of static
 checking as Cayenne, because they don't provide a place
 for the user to put proofs.  But extending them with
 some kind of syntax for proofs shouldn't be too hard, I think.
 Once you've got the proofs in there, I believe checking
 them at compile-time should be straight-forward.

I downloaded the NU-Prolog manual and skimmed it, but I didn't see the
features you're describing (probably because I haven't "done" Prolog
since my learn-5-languages-in-a-quarter class 12 years ago).  Could
you give me a pointer to which section of the manual I should read?

For Eiffel, as far as I can tell, what you get are assertions in
function declarations (pre- and post-conditions on the functions).  I
believe that the compile-time checking possible in Cayenne or other
hypothetical dependently-typed languages is significantly more
expressive.

Consider the Haskell function sortBy:

 sortBy :: (a -> a -> Ordering) -> [a] -> [a]

Suppose you wanted to verify that the output of sortBy was sorted
according to the supplied comparison function.  (This is half the
specification of a sorting function; the other half would be that the
output is a permutation of the input.)  In other words, suppose we
want to verify that the comparison function, when applied to every
successive pair of values in the output, returns either LT or EQ
(never GT).  This can be easily expressed as a postcondition on
sortBy.

Unfortunately, it cannot be proven; you need constraints on the
comparison function.  (For example, with the comparison function 

 compare x y = GT

no list with two or more elements could ever be sorted.)

At a minimum, for any x and y, it must not be the case that
x `cmp` y == GT
and
y `cmp` x == GT
It must be a precondition on sortBy that the comparison function
satisfies this constraint.  This can be expressed as a precondition,
if preconditions can have universally quantified formulas (so
preconditions can no longer be executable).

Now, when you define a sorting function, you want to state and verify
the above property.  Where does it go?  It's not a postcondition of
the comparison function, because it relates multiple different calls
of the comparator.  What syntax would you use to state that only some
functions of type (a -> a -> Ordering) are acceptable as arguments to
sortBy?  How does the compiler verify that calls to sortBy use only
acceptable comparison functions?

To me, the set of "acceptable comparison functions" feels like a type,
so "encoding" this requirement in the type system is natural.  It also
means that you don't have to think about a separate verification
compiler pass to perform the compile-time checking; it's part of type
checking, which I find conceptually clear and simple.
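
As a very rough approximation of that idea in Haskell as it stands
(all names invented; a dependent type system would demand a proof over
all pairs, where this can only spot-check a finite sample):

  module ValidCmp (ValidCmp, mkValidCmp, sortByValid) where

  import Data.List (sortBy)

  -- Comparators that have passed the antisymmetry check; the
  -- constructor is deliberately not exported.
  newtype ValidCmp a = ValidCmp (a -> a -> Ordering)

  -- Admit cmp only if no sampled pair has both cmp x y == GT and
  -- cmp y x == GT.
  mkValidCmp :: [a] -> (a -> a -> Ordering) -> Maybe (ValidCmp a)
  mkValidCmp samples cmp
    | and [ not (cmp x y == GT && cmp y x == GT)
          | x <- samples, y <- samples ] = Just (ValidCmp cmp)
    | otherwise = Nothing

  -- sortByValid only ever sees vetted comparators.
  sortByValid :: ValidCmp a -> [a] -> [a]
  sortByValid (ValidCmp cmp) = sortBy cmp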

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-25 Thread Carl R. Witty

Nick Kallen [EMAIL PROTECTED] writes:

  You cannot do this in Cayenne, there are no operations that scrutinize
  types.  They can only be built, and never examined or taken apart.
  This is a deliberate design choice.  The consequence is that types
  cannot affect the control of a program, so they cannot really influence
  the result of a program, and are thus needless at runtime.
...
 The whole idea behind dynamic types is that run-time type information can be
 inspected and manipulated.
 
 You can add dependant types to Cayenne (theoretically) just by allowing the
  ^ dynamic?
 run-time type inspection that you intentionally disallowed. In my mind,
 you'd kill two birds with one stone.

Watch out here; there may be a limit to how much run-time type
inspection it is reasonable to do in the presence of dependent types.
Suppose you're examining a type which happens to be the type of some
sorting function:
  (Ord a) => (l :: [a]) -> ((l' :: [a]), sorted l l')
How much type inspection are you willing to allow on that?  How much
good will it do you?

(I made up my own syntax in the above type expression; I hope it makes
sense.)

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-25 Thread Carl R. Witty

Fergus Henderson [EMAIL PROTECTED] writes:

 Certainly a language with dependent types should define exactly what
 types the type checker will infer.  But when generating code, the
 compiler ought to be able to make use of more accurate type information,
 if it has that information available, wouldn't you agree?
 I think it would be most unfortunate if simply adding more accurate type
 information could change a program's behaviour.

I'm not sure if that last sentence refers to 1) the compiler inferring
more accurate type information or 2) the user adding a more accurate
type declaration.

1) There's a word for "optimizers" that change the meaning of a
program (in ways not allowed by the language spec); that word is
"buggy".

2) Yes, I agree that the possibility that user-supplied type
declarations can change the meaning of the program is a strike against
the idea.

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-25 Thread Carl R. Witty


[Resend - mlist trouble; apologies if you've already
 received it.  -moderator]

Lennart Augustsson [EMAIL PROTECTED] writes:

  2) Yes, I agree that the possibility that user-supplied type
  declarations can change the meaning of the program is a strike against
  the idea.
 I don't find that so strange.  If there are no principal types
 (which we can't hope for), then user added signatures can
 have the effect of changing the meaning of a program.

I've lost track of what we're talking about here.  In what system can
we not hope for principal types?  (I believe that there are type
theories with dependent types, such as the one in Thompson's _Type
Theory and Functional Programming_, where each term has at most one
type; so it can't just be dependent types that disallow principal
types.)

 BTW, Haskell already has this property.  There are programs that
 yield different results depending on if you have a type signature
 or not (and it's not because of the numeric defaulting).

Could you give an example?  I can't think how to construct a 
Haskell 98 program with this property (unless you count "compiling"
and "failing to compile" as different results).

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-25 Thread Carl R. Witty

"Nick Kallen" [EMAIL PROTECTED] writes:

   If this is true, then what I'm doing is horrible. But I don't see
   how this leads to nondeterminism or broken referential
   transparency. min2 returns the same value for the same list, but
   it's simply more efficient if we happen to know some more
   information about the list.
  
   In this particular case that happens to be so.  But it's not true in
   general.  What if the body of min2 were defined so that it returned
   something different in the two cases?  Your code has no proof that the
   code for the two cases is equivalent.  If it's not, then the behaviour
   would depend on whether the compiler could deduce that a particular
   argument had type Sorted or not.  And that in turn could depend on the
   amount of inlining etc. that the compiler does.
 
 I'm not asking the compiler to deduce anything. I'm talking about run-time
 type matching; this is dynamic types!

Ouch.  I hadn't realized this, although I should have.

I think you're going to run into severe efficiency/implementation
problems if you try to implement an approach like this.  As far as I
can tell, you need to store a potentially very large set of type
information with every object (every list cell, etc.), and you need to
figure out ways to efficiently create and query this type
information.  Sounds tough.

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-24 Thread Carl R. Witty

[EMAIL PROTECTED] writes:

 [EMAIL PROTECTED] writes:
 
  enabling types to express all properties you want is, IMO, the right way.
 
 Why do I feel that there must be another approach to programming?
 
 How many people do you expect to program in Haskell once you are done adding all
 it takes to "express all imaginable properties through types"? What kind of 
 baroque monster will it be? Is type really _the_ medium for everything?

I would certainly love to program in a language that let me specify
that a sorting function really did sort.  Also, I am confident that if
done right, a dependent type system could be added on to Haskell such
that all existing Haskell programs would continue to be valid.

I see a couple of reasons to "enable types to express all properties
you want".

1) I've been following efforts in the theorem proving/proof
verification community which are based on this idea.  Several
type-theory based verification systems are based directly on
expressing properties in types (e.g. Coq, LEGO, the Alf* family,
NuPRL).  Others (PVS and Ontic) gain a lot of mileage out of a very
expressive type system.  (My apologies for any relevant systems I've
left out here.)  Based on these efforts, it seems very natural to me
to extend this idea to a real, usable programming language.

2) Type checking has been widely studied and is pretty well
understood.  It makes sense to take this base and use it to make
languages even more powerful and expressive.

Carl Witty
[EMAIL PROTECTED]





Re: Haskell 2 -- Dependent types?

1999-02-16 Thread Carl R. Witty

Lars Lundgren [EMAIL PROTECTED] writes:

 We have already accepted undecidable type checking, so why not take a
 big step forward, and gain expressive power of a new magnitude, by
 extending the type system to allow dependent types.

Wait a minute...who has accepted undecidable type checking?  Are you
talking about the new type class features in GHC?  As far as I know,
those are explicitly documented as experimental, and must be enabled
by a command-line option.  I'm not sure that anybody has "accepted"
undecidable type checking.

Carl Witty
[EMAIL PROTECTED]





Re: syntactic sugar for arrows

1999-01-29 Thread Carl R. Witty

Ross Paterson [EMAIL PROTECTED] writes:

 Time to ditch all those dusty old monads and upgrade to arrows.
 However the point-free style of that paper won't appeal to everyone.
 I've placed a proposal for a Haskell extension with a do-notation-style
 syntax for arrows at
 
   http://www.soi.city.ac.uk/~ross/notes/ArrowsInHaskell.html
 
 Comments welcome.

I'd like to point out one thing about the implementability of your new
syntax.

Implementing the layout rule in Haskell requires a tricky dance of
passing information back and forth between the lexer and parser.  One
thing that can ease the implementation is that the lexer can know when
a layout group starts, using the rule that the next token after a
'let', 'do', 'where', or 'of' (did I miss any?) starts layout, unless
it's an open brace.  That is, the layout openers are all reserved
words, and are always layout openers wherever they appear.

Your syntax breaks this, since it makes '->' be a layout opener in
some but not all places it can appear.
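
For contrast, the existing openers are unconditional (sketch):

  example :: Int -> String     -- a type arrow: never opens layout
  example n =
    let m = n + 1              -- 'let' always opens layout here
    in case m of               -- 'of' always opens layout here
         0 -> "zero"           -- a case arrow: must not open layout
         _ -> "nonzero"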

Carl Witty
[EMAIL PROTECTED]






Re: kind mismatch ($)

1999-01-04 Thread Carl R. Witty

Simon Peyton-Jones [EMAIL PROTECTED] writes:

  One additional comment. I frequently use quotes (') as suffixes
  for identifier and type names. As can be seen above ghc loves adding
  quotes (` and ') around parts of its messages. This becomes *very*
  confusing when the last part of the message consists of an
  identifier suffixed by a one or more quote symbols.
 
 I've noticed this too, but the trouble is that if you leave the quotes
 out then identifiers tend to get lost in the text.  Better ideas welcome.

Well, you could always use different quoting characters.  Rather than
`Int'', you could use any of:

"Int'"
(Int')
[Int']
{Int'}
<Int'>
` Int' '

etc.

Carl Witty
[EMAIL PROTECTED]



Re: Reduction count as efficiency measure?

1998-11-25 Thread Carl R. Witty

Keith Wansbrough [EMAIL PROTECTED] writes:

 So while Hugs gives you a reduction count (or even a millisecond
 duration), this is essentially meaningless: in a real application
 you would compile the code with an optimising compiler.  The effect
 this can have on your execution time can easily be more than merely
 a constant factor: it can change the order of your algorithm.

Is this true in practice?  That is, are there programs which have
different asymptotic running times when compiled under ghc or hbc than
when running under Hugs?

It would actually surprise me if there were; I'm having a hard time
imagining a realistic optimization that would do this.  (Which could
easily be a failure of my imagination.)

Carl Witty
[EMAIL PROTECTED]





Re: Reduction count as efficiency measure?

1998-11-25 Thread Carl R. Witty

Ralf Hinze [EMAIL PROTECTED] writes:

 | Is this true in practice?  That is, are there programs which have
 | different asymptotic running times when compiled under ghc or hbc than
 | when running under Hugs?
 | 
 | It would actually surprise me if there were; I'm having a hard time
 | imagining a realistic optimization that would do this.  (Which could
 | easily be a failure of my imagination.)
 
 What about common subexpression elimination? AFAIK neither Hugs nor GHC
 nor HBC implement CSE, but if they did the asymptotic running time
 could sometimes radically change. 

OK, so my imagination is severely lacking. :-)

I'm still curious about my first question, though, about the specific
optimizations included in ghc and hbc.  If in fact they don't do CSE,
are there optimizations which they do perform which would change the
asymptotic running time?
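
For the record, Ralf's CSE point is easy to make concrete (a sketch,
not a claim about what ghc or hbc actually do):

  -- Without sharing, 'slow' is exponential in n; commoning up the two
  -- identical calls, as CSE would, makes it linear.
  slow :: Int -> Integer
  slow 0 = 1
  slow n = slow (n - 1) + slow (n - 1)

  fast :: Int -> Integer
  fast 0 = 1
  fast n = let s = fast (n - 1) in s + s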

Carl Witty
[EMAIL PROTECTED]





Re: Int vs Integer

1998-09-11 Thread Carl R. Witty

Sigbjorn Finne [EMAIL PROTECTED] writes:

  This wants to add two 1-word numbers in a fast, unrolled loop.
  It sets up various registers (size, and pointers to
  source1, source2, and destination).
  It computes the number of complete times to go through the
  loop (0).
  It computes the location to jump into the middle of the loop,
  to handle the one word.
  It adjusts the various pointer registers to compensate for the
  fact that it's jumping into the middle of the loop.
  It jumps into the middle of the loop.
  It does the addition.
 
 So, unrolling the loop in hand coded assembly does speed it up, no?
 The MP_INT representation doesn't particularly accommodate the kinds
 of code optimisations I believe you're looking for, but the GMP code
 does consider 'the Int case' specially in a number of places; look at
 the code for divmod, for instance.

No, unrolling does not speed up this case.  For the case of addition
of Int-sized numbers, a "rolled" (i.e., not unrolled; is there a word
for this?) loop would be a fair bit faster; the above would reduce to:

  It sets up various registers (size, and pointers to
  source1, source2, and destination).
  It does the addition.

I looked at the code for div; yes, it does handle Int-sized divisors,
but doesn't do anything special for Int-sized dividends, so you still
have annoying loops, etc. for the Int-sized case, as well as several
function calls.  I bet that just adding a conditional:

   if (abs_usize == 1 && abs_vsize == 1) {
   /* do special-case code */
   }

at the beginning of each of the mpz_*() functions would speed up
Int-sized operations by an order of magnitude, at the cost of a few
percent slowdown on operations that were bigger than an Int.

 Clearly there's a range of optimisations possible here; using an
 efficiently implemented bignum library being one (not irrelevant) way
 to improve matters. Changing the representation of Integer is another,
 distinguishing at the Haskell level between ones that fit in a machine
 word and those that don't. We plan to use such a representation for
 GHC/Hugs.

I'm glad to hear it; do you have anything written on exactly how this
will work?

 It'll be interesting to see the bottom line NoFib figures with this
 change of representation in place.

Yes, it will.

Carl Witty
[EMAIL PROTECTED]





Re: ghc-2.10 fails on Red Hat Linux 5.0?

1998-05-27 Thread Carl R. Witty

Antony Bowers [EMAIL PROTECTED] writes:

 Does ghc (any version) work on Linux with glibc-2 (libc6)?

It works for me.

I installed the 2.10 binary release and used it to compile 3.01 from
source; both 2.10 (which is linked with libc5) and 3.01 (which is
linked with libc6) can compile the following simple program:

 main = putStr "Hello, world!\n"

to get a result which is linked with libc6 and works correctly.

I'm using gcc 2.7.2.3 and a version of libc6 which claims to be GNU
libc 2.0.7pre1; I'm working on a Debian system which is running a
fairly current version of the soon-to-be-released Debian 2.0.

(I had a little bit of trouble getting GHC itself to compile under
libc6; I'm working with Simon Marlow to get ghc-current to compile.)

Carl Witty
[EMAIL PROTECTED]



problems compiling ghc 3.01 for linux

1998-05-11 Thread Carl R. Witty

I ran into minor problems compiling GHC 3.01 on my up-to-date Debian
Linux machine.  I'm pretty sure that the problem is that I'm using
Libc 6 (GNU libc 2).

Basically, several BSD extensions (in particular, the types caddr_t
and u_long, and the tm_zone and tm_gmtoff members of struct tm) are
not available when _POSIX_SOURCE is defined; I also had to define
_BSD_SOURCE to include them.  (One problem is that configure tests for
feature availability with no _*_SOURCE defines; defining _POSIX_SOURCE
then disables features that configure detected as present.)

Here's the patch I made:

--- stgdefs.h~  Sun Oct  5 13:34:00 1997
+++ stgdefs.h   Sun May 10 15:52:25 1998
@@ -53,10 +53,12 @@
 #ifdef NON_POSIX_SOURCE
 #undef _POSIX_SOURCE
 #undef _POSIX_C_SOURCE
+#define _BSD_SOURCE
 #else
 # ifndef aix_TARGET_OS
 /* already defined on aix */
 #define _POSIX_SOURCE 1
+#define _BSD_SOURCE 1
 # endif
 #ifndef irix_TARGET_OS
 #define _POSIX_C_SOURCE 199301L


While this works for me, I wouldn't suggest that you apply it blindly;
I don't know if it might break other places.  Ideally, it would be
conditioned on having GNU libc 2.0 (Linux libc 6); I don't know how to
check for that here.

Random information about my system:
Linux gemini 2.0.29 #4 Thu Oct 23 00:34:55 PDT 1997 i686 unknown
gcc version 2.7.2.3

Let me know if you have any questions, or want me to test patches.

Carl Witty
[EMAIL PROTECTED]



Re: problems compiling ghc 3.01 for linux

1998-05-11 Thread Carl R. Witty

Simon Marlow [EMAIL PROTECTED] writes:

 Thanks Carl.  Several people have run into this before (check the list
 archives), but we still don't have any recent Linux installations here
 to test out a proper fix on.  

That's why I said:

 Let me know if you have any questions, or want me to test patches.

(I am keeping up with ghc-current, using anonymous CVS, so if you just
check in a proposed fix and let me know I can tell you if it
compiles.)

Carl Witty
[EMAIL PROTECTED]



Re: quicksort and compiler optimization

1998-05-10 Thread Carl R. Witty

Mariano Suarez Alvarez [EMAIL PROTECTED] writes:

 qsort can be rewritten (by the compiler, ideally...) so that the list is
 traverse once, without losing any laziness:
 
  infix 5 #
  infix 6 ?:
 
 Define
 
  qsort [] = []
 qsort (x:xs) = let (a,b) = foldr (\y -> (y ?: (<x) # y ?: (>=x))) ([],[]) xs
 in qsort a ++ [x] ++ qsort b
 
 where (#) is cartesian product of functions
 
 f # g  = \(x,y) -> (f x, g y)
 
 and ?: is a conditional cons
 
  x ?: p = if p x then (x:) else id
 
 This definition of qsort is equivalent (on reasonable lists, ie finite
 with no bottoms inside) to the original
 
  qsort' [] = []
 qsort' (x:xs) = qsort' [y | y <- xs, y < x] ++ [x] ++ qsort' [y | y <- xs, y >= x]
 
 The proof is easy.

Yes, I think this is sound, but is it an optimization?  Consider

 (hd:tl) = qsort [0 .. 100]

I believe that evaluating hd will do enough evaluations that tl will
point to something like:
  qsort (
(1 ?: (>= 0)) $
(2 ?: (>= 0)) $
(3 ?: (>= 0)) $
...
(100 ?: (>= 0)) $
[]
  )

With
 (hd':tl') = qsort' [0 .. 100]

I believe that evaluating hd' will make tl' point to something like:
  qsort' [y | y <- [1, 2, 3, ..., 100], y >= 0]

This discussion began as a way to keep xs (from the definition of
qsort' above) from being live in this situation.  The definition of
qsort above does this--possibly (if xs were not shared elsewhere)
freeing up some cons cells--but only at the cost of introducing some
much larger data structures.  Note that all the elements of xs are
still live in both cases.  If we're to judge the above two quicksort
implementations on how much data is live during intermediate stages, I
think the original qsort' wins.

Carl Witty
[EMAIL PROTECTED]





Re: Is this a bug?

1998-03-06 Thread Carl R. Witty

Marc van Dongen= [EMAIL PROTECTED] writes:

 [snip]
 
 :   module Main( main ) where
 :   import List( genericLength )
:   main = putStr (show integral) >>
:  putStr "\n" >>
 :  return ()
 :where integral = genericLength []
 
 [snip]
  
 : This is a legal Haskell program. The (ambiguous) type of `integral' is
 : (Num a = a), but Haskell disambiguates numeric expressions with the
 : help of `default' declarations. As per Haskell 1.4 (see section 4.3.4
 
 Thanks for the pointer.
 
 : of the report), this means resolving `integral' to be a value of type
 : Int.
 
 I think this ``resolving'' may lead to unwanted results. It took
 me quite some time to discover that Integral was resolved to Int
 in some program I had. Is there a possibility of generating a warning
 message whenever programs like the one above have to be disambiguated?
 Maybe a compiler-switch to turn these warning messages on and off?

If you really don't like the default processing, you could use
  default ()
to disable it totally.
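
For example (a sketch of the same program with defaulting disabled;
the type annotation then becomes mandatory instead of silent):

  module Main( main ) where

  import List( genericLength )

  default ()   -- no defaulting: ambiguous numeric types are errors

  main :: IO ()
  main = putStr (show (genericLength [] :: Integer) ++ "\n")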

Carl Witty
[EMAIL PROTECTED]



Haskell 1.4 and Unicode

1997-11-07 Thread Carl R. Witty

I have some questions regarding Haskell 1.4 and Unicode.  My source
materials for these questions are "The Haskell 1.4 Report" and the
files

ftp://ftp.unicode.org/Public/2.0-Update/ReadMe-2.0.14.txt   
  and
ftp://ftp.unicode.org/Public/2.0-Update/UnicodeData-2.0.14.txt

It's possible that question 2 below would be resolved if I actually
read the Unicode book; if so, I apologize in advance.

1) I assume that layout processing occurs after Unicode preprocessing;
otherwise, you can't even find the lexemes.  If so, are all Unicode
characters assumed to be the same width?

2) The Report uses the following classes of characters:
uniWhite - any UNIcode character defined as whitespace
nonbrkspc ???
UNIsmall - any Unicode lowercase letter
UNIlarge - any uppercase or titlecase Unicode letter
UNIsymbol - Any Unicode symbol or punctuation
UNIdigit - A Unicode numeric

The file ReadMe-2.0.14.txt above defines the following classes of
characters:

Normative
Mn = Mark, Non-Spacing
Mc = Mark, Spacing Combining
Me = Mark, Enclosing

Nd = Number, Decimal Digit
Nl = Number, Letter
No = Number, Other

Zs = Separator, Space
Zl = Separator, Line
Zp = Separator, Paragraph

Cc = Other, Control
Cf = Other, Format
Cs = Other, Surrogate
Co = Other, Private Use
Cn = Other, Not Assigned

Informative
Lu = Letter, Uppercase
Ll = Letter, Lowercase
Lt = Letter, Titlecase
Lm = Letter, Modifier
Lo = Letter, Other

Pc = Punctuation, Connector
Pd = Punctuation, Dash
Ps = Punctuation, Open
Pe = Punctuation, Close
Po = Punctuation, Other

Sm = Symbol, Math
Sc = Symbol, Currency
Sk = Symbol, Modifier
So = Symbol, Other

It's not obvious how the Unicode-defined classes map onto the classes
in the Report.  My guess is:

uniWhite == classes Zs, Zl, Zp
UNIsmall == class Ll
UNIlarge == classes Lu, Lt
UNIsymbol == classes Sm, Sc, Sk, So
UNIdigit == classes Nd, Nl, No
nonbrkspc == "NO-BREAK SPACE" (\h00a0)

However, it would also seem quite reasonable to include class Lo
(which includes things like "Hebrew letter Alef") in UNIsmall or
UNIlarge; and to include some of the Punctuation classes in UNIsymbol.

3) What does it mean that Char can include any Unicode character?

If I compile and run the following program on my vanilla American UNIX
box:

main = putChar '\x2473' {- print a "circled number twenty" -}

to get a program "ctwenty", and I run

./ctwenty | od -c

(od prints out each byte of output), what will I see?

Will the following program

main = getChar >>= (print . fromEnum)

ever print out a number greater than 256?

If the answers to the above questions are "implementation dependent",
what are some of the behaviors that implementations might plausibly
have?

Carl Witty
[EMAIL PROTECTED]






small wart in the Report's description of the layout rule

1997-11-07 Thread Carl R. Witty

The Haskell Report says:

To facilitate the use of layout at the top level of a module (an
implementation may allow several modules may reside in one file), the
keyword module and the end-of-file token are assumed to occur in
column 0 (whereas normally the first column is 1). Otherwise, all
top-level declarations would have to be indented.

I've read this many times without thinking about it; however, once I
thought about it, it doesn't make sense.  Following a module, the
keyword "module" is "an illegal lexeme...encountered at a point where
a close brace would be legal"; therefore, the close brace is properly
inserted no matter what column "module" occurs in.  Therefore, I
suggest that the above paragraph be removed from the Report.

Carl Witty
[EMAIL PROTECTED]






printing the ghc source?

1997-10-01 Thread Carl R. Witty

I'm interested in learning how GHC works.  To this end, I'd like to
print out large chunks of its source and pore over them.

I would have hoped that the fact that GHC is written in the "literate
programming" style would make it easy to get high-quality printouts;
however, it doesn't seem to work.  When I run lit2latex over
ghc/compiler/*/*.lhs, I get many errors which say:

fatal flex scanner internal error--end of buffer missed
lit2latex: error(s) from prefilter pipe

and the relevant source file is ignored.

What I really want is a way to get a fairly good-looking DVI file
consisting of the entire GHC source; in an ideal world, this would
have page-numbered cross references in the margin and a good index.
(I would then print out selected pages from this huge DVI file.)  Even
without these refinements, it seems like I ought to be able to do
better than printing out the source (including the "literate"
comments) in a fixed-width font.

Carl Witty
[EMAIL PROTECTED]