Re: [Haskell-cafe] speeding up fibonacci with memoizing

2007-02-18 Thread William Lee Irwin III
On Sun, Feb 18, 2007 at 06:15:25PM -0500, [EMAIL PROTECTED] wrote:
 Now that's an industrial-strength Fibonacci.  It's O(log n) not
 including the cost of adding and multiplying large Integers, and
 uses a bounded amount of memory between calls, making it suitable
 for a library.
 The slowest part of the test program is actually the bit that prints
 the number.  So I used this driver program:

I've been here before.

http://www.haskell.org/pipermail/haskell-cafe/2005-January/008839.html


-- wli


Re: [Haskell-cafe] Parity of the number of inversions of a permutation

2005-03-11 Thread William Lee Irwin III
On Wed, Mar 09, 2005 at 12:42:09PM +0100, Henning Thielemann wrote:
 I think it is a quite popular problem. I have a permutation and I want to 
 count how often a bigger number appears to the left of a smaller one. More precisely 
 I'm interested in whether this number is even or odd. That's for instance 
 the criterion to decide whether Lloyd's shuffle puzzle is solvable or not.
 Example:
   1 4 3 2
 I can choose six pairs (respecting the order) of numbers out of it, namely 
 (1,4) (1,3) (1,2) (4,3) (4,2) (3,2), where in the last three pairs the 
 first member is greater than the second one. Thus I have 3 inversions and 
 an odd parity.
 I'm searching for a function which sorts the numbers and determines the 
 parity of the number of inversions. I assume that there are elegant and 
 fast algorithms for this problem (n * log n time steps), e.g. a merge sort 
 algorithm. A brute force solution with quadratic time consumption is
 countInversions :: (Ord a) => [a] -> Int
 countInversions = sum . map (\(x:xs) -> length (filter (<x) xs)) . init . tails
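
An O(n log n) merge-based counter of the kind asked about might look like
the sketch below (my own illustration, not code from the thread; the name
countInversionsFast is made up here):

\begin{code}
-- Merge sort that also counts inversions: when an element of the right run
-- is emitted while k elements of the left run remain, it forms exactly k
-- inversions with them.
countInversionsFast :: Ord a => [a] -> Int
countInversionsFast = fst . go
  where
    go []  = (0, [])
    go [x] = (0, [x])
    go xs  = (nl + nr + nm, zs)
      where
        (l, r)   = splitAt (length xs `div` 2) xs
        (nl, l') = go l
        (nr, r') = go r
        (nm, zs) = merge (length l') l' r'

    -- k is the number of elements still unconsumed from the left run.
    merge _ [] ys = (0, ys)
    merge _ xs [] = (0, xs)
    merge k (x:xs) (y:ys)
      | x <= y    = let (n, zs) = merge (k - 1) xs (y:ys) in (n,     x:zs)
      | otherwise = let (n, zs) = merge k (x:xs) ys       in (n + k, y:zs)

-- Parity check against the example above:
--   odd (countInversionsFast [1,4,3,2])  ==  True   (3 inversions)
\end{code}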

That's not a permutation, that's a cycle. Permutations are sets of
disjoint cycles (which commute).


-- wli


Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-02-04 Thread William Lee Irwin III
On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
 But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
 functions bounded to the upper by f.  So 1/O(f) has no meaning at the
 first glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
 (i.e. lifting from scalar reciprocal to the reciprocal of a function) and
 then as lifting from a reciprocal of a function to the reciprocal of each
 function of a set. Do you mean that?

On Thu, Feb 03, 2005 at 08:16:49PM -0500, Dylan Thurston wrote:
 I think this is the only reasonable generalization from the
 established usage of, e.g., 2^(O(n)).  In practice, this means that
 1/O(n^2) is the set of functions asymptotically bounded below by
 1/kn^2 for some k.

Careful, 2^x is monotone increasing; 1/x is monotone decreasing. I
said 1/O(n^2) is Omega(1/n^2) for a good reason. Inequalities are
reversed by monotone decreasing functions. Likewise, sech(O(n^2)) =
Omega(sech(n^2)), which is happily immune to the effects of sign.
Usually f(n) = O(g(n)) is defined as: there exist N, K so that
|f(n)| <= K*|g(n)| for all n > N, so e.g.
e^x \in O((-1)^{\chi_\mathbb{Q}}\cdot e^x) etc.

Also, you're in a bit of trouble wrt. 2^(O(n)). O(2^n) is something
rather different. O(2^n) has |f(n)| <= K*|2^n|, but 2^(O(n)) is 2^|f(n)|
where |f(n)| <= K*|n|. If K = 1/log(2) is sharp, then we have
2^|f(n)| = e^|n| \in omega(2^n).
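
For reference, the standard definitions in play (restated here for clarity,
not quoted from anyone):

    f \in O(g)      iff  there exist K, N with |f(n)| <= K|g(n)| for all n > N
    f \in \Omega(g) iff  there exist K > 0, N with |f(n)| >= K|g(n)| for all n > N

so for positive g \in O(n^2), the reciprocal satisfies 1/g(n) >= 1/(K n^2)
for all large n, i.e. 1/g \in \Omega(1/n^2): the bound flips under a
monotone decreasing map.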


-- wli


Re: [Haskell-cafe] fastest Fibonacci numbers in the West

2005-01-28 Thread William Lee Irwin III
On Thursday, 27 January 2005, 06:08, William Lee Irwin III wrote:
 Inspired by a discussion on freenode #haskell, I tried to write the
 fastest Fibonacci number function possible, i.e. given a natural
 number input n to compute F_n.
 For the moment, mlton-generated binaries crash computing fib (10^8-1),
 and there is a 6:1 speed difference for fib (10^7-1) between the two,
 where mlton-generated binaries take just under 1 minute, and ghc-
 generated binaries take just under 6 minutes.

On Thu, Jan 27, 2005 at 03:26:39PM +0100, Daniel Fischer wrote:
 Obviously, your machine is significantly faster than mine.
 On mine, fib (10^6) takes a little under two minutes, fib (10^7-1) I ^C-ed 

~/tmp/fib $(( 10**6 - 1 ))  211.91s user 1.13s system 83% cpu 4:14.96 total

In general the printed results and data structures are expected to be
in a range where asymptotic behavior dominates. 10^5-1 seems to be
the break-even point where more advanced algorithms in Haskell start
overtaking naive implementations in lighter-weight runtime systems. This
doesn't help much when usefully-advanced languages have sufficiently
lightweight runtime systems. I'd almost expect the overhead of printing
the result to dominate all this, but somehow it doesn't appear to be
the deciding factor, perhaps because they all call gmp to do it.

In the range where asymptotic behavior is expected to dominate, the
numbers are huge data structures, and having more of them simultaneously
live or using more arbitrary-precision temporaries is painful.


On Thu, Jan 27, 2005 at 03:26:39PM +0100, Daniel Fischer wrote:
 after twenty.
 I think, more readable than
 unfoldl f x = case f x of
   Nothing -> []
 divs 0 = Nothing
 divs k = Just (uncurry (flip (,)) (k `divMod` 2))
 would be
 unfoldl f x = case f x of
  Nothing -> []
 divs 0 = Nothing
 divs k = Just (k `quotRem` 2)
 -- this has no influence on performance, since it's optimized anyway.

unfoldl was intended to match unfoldr's calling convention verbatim,
but produce a list in the reverse order, not really to be easy-to-use
in this specific problem.


On Thursday, 27 January 2005, 06:08, William Lee Irwin III wrote:
 Anyway, thoughts on how to improve all this from the programmer's
 point of view, or otherwise explaining what's going on or ameliorating
 whatever effect is at work here would be appreciated.

On Thu, Jan 27, 2005 at 03:26:39PM +0100, Daniel Fischer wrote:
 I thought, I'd do it in the ring of integers in Q(sqrt(5)), code attached,
 this was a whiff faster for n = 70 on my machine, a whiff slower 
 for n = 10^6 -- any idea how that may come?

I suspect the divide-and-conquer scheme for exponentiation doesn't do
bitreversal, which eliminates various redundant computations.
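
For concreteness, one O(log n)-multiplication scheme of the sort under
discussion is the fast-doubling recurrence; this is a sketch I'm adding for
illustration, not the code from either post:

\begin{code}
-- F(2k)   = F(k) * (2*F(k+1) - F(k))
-- F(2k+1) = F(k)^2 + F(k+1)^2
-- fibPair n returns (F(n), F(n+1)) using O(log n) Integer multiplications.
fibPair :: Integer -> (Integer, Integer)
fibPair 0 = (0, 1)
fibPair n = if even n then (c, d) else (d, c + d)
  where
    (a, b) = fibPair (n `div` 2)
    c      = a * (2 * b - a)
    d      = a * a + b * b

fib :: Integer -> Integer
fib = fst . fibPair
\end{code}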


-- wli


Re: [Haskell-cafe] Re: mathematical notation and functional programming

2005-01-28 Thread William Lee Irwin III
On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
 But what do you mean with 1/O(n^2) ? O(f) is defined as the set of
 functions bounded to the upper by f.  So 1/O(f) has no meaning at the
 first glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x)
 (i.e. lifting from scalar reciprocal to the reciprocal of a function) and
 then as lifting from a reciprocal of a function to the reciprocal of each
 function of a set. Do you mean that?

My first guess is Omega(1/n^2).


-- wli


Re: [Haskell-cafe] Re: Hugs vs GHC (again) was: Re: Some random newbie questions

2005-01-19 Thread William Lee Irwin III
On 19 January 2005 09:45, Ben Rudiak-Gould wrote:
 Okay, my ignorance of Posix is showing again. Is it currently the
 case, then, that every GHC thread will stop running while a disk read
 is in progress in any thread? Is this true on all platforms?

On Wed, Jan 19, 2005 at 01:39:05PM -, Simon Marlow wrote:
 It's true on Unix-like systems, I believe.  Even with -threaded.  It
 might not be true on Win32.

How does forkOS fit into this picture? It's described in the
documentation as allowing concurrent execution of system calls
and other activity by other threads.


-- wli


Re: [Haskell-cafe] convergence of functions of Complex variables

2004-12-15 Thread William Lee Irwin III
On Wed, Dec 15, 2004 at 02:07:10AM -0800, William Lee Irwin III wrote:
 This does not work as expected on Complex numbers due to some odd
 typechecking hassles apparently associated with abs. How do I get this
 to typecheck for both real (e.g. Double) and Complex arguments?

On Wed, Dec 15, 2004 at 10:28:18AM +, Ross Paterson wrote:
 abs :: Num a => a -> a, whereas you want something that returns a Double.
 You could define
 class Norm a where
     norm :: a -> Double
 instance Norm Float where
     norm = realToFrac . abs
 instance Norm Double where
     norm = abs
 instance RealFloat a => Norm (Complex a) where
     norm = realToFrac . magnitude
 and use norm instead of abs.

Thanks; this appears to do the trick for me. Something of this kind
would be useful to have in the std. libraries, at least for me.
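
As a usage sketch (my own, not from the thread): with the class above in
scope, a convergence test can be written once and used at both Double and
Complex Double; closeEnough is a name invented here.

\begin{code}
closeEnough :: (Norm a, Num a) => Double -> a -> a -> Bool
closeEnough eps x y = norm (x - y) < eps
\end{code}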


-- wli


[Haskell-cafe] convergence of functions of Complex variables

2004-12-15 Thread William Lee Irwin III
This does not work as expected on Complex numbers due to some odd
typechecking hassles apparently associated with abs. How do I get this
to typecheck for both real (e.g. Double) and Complex arguments?

\begin{code}
module Jacobi (sn, cn, dn, sd, cd, nd, cs, ds, ns, sc, dc, nc) where

scd x k | abs k < 1e-14 = (sin x, cos x, 1)
| otherwise = ((1+m)*s/(1+m*s^2), c*d/(1+m*s^2),
(1 - m*s^2)/(1+m*s^2))
where
k' = cos $ asin k
m = -tanh(log(k')/2)
(s, c, d) = scd (x/(1+m)) m

sn x k = let (s,_,_) = scd x k in s
cn x k = let (_,c,_) = scd x k in c
dn x k = let (_,_,d) = scd x k in d
sd x k = (sn x k)/(dn x k)
cd x k = (cn x k)/(dn x k)
nd x k = 1/(dn x k)
cs x k = (cn x k)/(sn x k)
ds x k = (dn x k)/(sn x k)
ns x k = 1/(sn x k)
sc x k = (sn x k)/(cn x k)
dc x k = (dn x k)/(cn x k)
nc x k = 1/(cn x k)
\end{code}
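
A quick sanity check one might run against this module (a hypothetical
session I'm adding; it leans on the identity sn^2 + cn^2 = 1, which should
hold to within the accuracy of the recursion):

    *Jacobi> sn 0.5 0.3 ^ 2 + cn 0.5 0.3 ^ 2
    (expected to print a value very close to 1.0)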


Re: Non-Academic Haskell was Re: [Haskell-cafe] Non-technical Haskell question

2004-11-30 Thread William Lee Irwin III
On Tue, Nov 30, 2004 at 04:57:27PM +0100, Shae Matijs Erisson wrote:
 The #haskell irc channel on irc.freenode.net is composed of many different
 flavors of programmer, from self-educated 16 year olds on up to post doctoral
 students studying functional programming.
 I'm a self-educated, self-employed programmer. I use Python in most of my
 paying work but would very much prefer to use Haskell.
 It seems obvious to me (but not to most of my clients :-) that the various
 powerful and expressive patterns in Haskell allow programmers to deliver more
 business value in less time than almost any other programming language.

I've been sitting around #haskell on EfNet for something like 5 years
and wondering why no one ever came by.


-- wli


Re: [Haskell-cafe] Are handles garbage-collected?

2004-10-26 Thread William Lee Irwin III
Ben Rudiak-Gould [EMAIL PROTECTED] writes:
 For this to really work properly the whole OS needs to be designed
 around it. Such OSes exist -- they're called capability-based -- but
 like so many other good ideas this one hasn't generated much interest
 in industry. I think we're stuck with explicit file handle management
 for the next couple of decades.

On Tue, Oct 26, 2004 at 09:22:58PM +0200, Shae Matijs Erisson wrote:
 hOp/House doesn't have a filesystem yet.
 See http://www.cse.ogi.edu/~hallgren/House/ (source recently released)
 Maybe we won't always require explicit file handles.

It doesn't seem to have demand paging either. Hmm. Maybe I should look
at this sometime.


-- wli


Re: Did you come to Haskell from Python? was Re: [Haskell-cafe] Are handles garbage-collected?

2004-10-26 Thread William Lee Irwin III
Remi Turk [EMAIL PROTECTED] writes:
 At least one. (Me) And, judging from the amount of references to
 Python in these mailing-lists, I really doubt I'm the only one.

On Tue, Oct 26, 2004 at 09:21:10PM +0200, Shae Matijs Erisson wrote:
 At least two. I also came to Haskell from Python.

I made a conscious effort at some point in college to learn a variety
of programming languages. Haskell arose as a particularly nice ML
variant in this exploration. I still find it useful for expressing
mathematical ideas and helping with various computations.


-- wli


Re: [Haskell-cafe] Are handles garbage-collected?

2004-10-26 Thread William Lee Irwin III
On Tue, Oct 26, 2004 at 10:22:23PM +0200, Tomasz Zielonka wrote:
 My road to Haskell went from C, C++ and Perl through Ocaml, Clean and
 Erlang. I was mainly motivated by dissatisfaction with languages I used
 at the moment. I am very happy with Haskell and I think I'll be using it
 for some time.
 I still use C++ at work, mostly because of performance requirements and
 participation in multi-developer projects (multi stands for 2-3 here).
 To be honest, I am very happy with what we managed to achieve in C++,
 but I still get angry when I code some more complicated but
 less-performance-critical parts and I think how easy it would be in
 Haskell :)

All C and assembly programming for kernels and firmware at work here.


-- wli


Re: [Haskell-cafe] strictness and the simple continued fraction

2004-10-12 Thread William Lee Irwin III
On Tue, Oct 12, 2004 at 08:48:27AM +0100, Simon Peyton-Jones wrote:
 If you are interested in arbitrary precision arithmetic using continued
 fractions, you may want to check out the work of David Lester.  And
 Peter Potts et al.  Just type exact real arithmetic into Google.

That's where I got the ContFrac.hs I started with, though I used a
different search string. =)


-- wli


Re: [Haskell-cafe] strictness and the simple continued fraction

2004-10-11 Thread William Lee Irwin III
On Mon, Oct 11, 2004 at 09:53:16PM -0400, Scott Turner wrote:
 I tried using continued fractions in a spiffy lazy list implementation a 
 while ago. Never got them working as well as expected. 
 Eventually I realized that calculating with lazy lists is not as
 smooth as you might expect.
 For example, the square root of 2 has a simple representation 
 as a lazy continued fraction, but if you multiply the square root of 2 by 
 itself, your result lazy list will never get anywhere.  The calculation will 
 keep trying to determine whether or not the result is less than 2, this being 
 necessary to find the first number in the representation. But every finite 
 prefix of the square root of 2 leaves uncertainty both below and above, so 
 the determination will never be made.
 Your problems may have some other basis, but I hope this helps.

I hit that one, too. That's nasty enough it may be best to give up on
the infinite case, at least. I can't think of a way to salvage all this.


-- wli


[Haskell-cafe] strictness and the simple continued fraction

2004-10-09 Thread William Lee Irwin III
So, I discovered that simple continued fractions are supposed to be
spiffy lazy lists and thought I'd bang out some continued fraction code.
But then I discovered ContFrac.hs and couldn't really better it. Of
course, I went about trying to actually do things relying on their
laziness, and discovered they weren't as lazy as I hoped they'd be.
Simple uses of approximations at the ghci command line such as:

instance Ord ContFrac where
    (ContFrac (x:xs)) `compare` (ContFrac (y:ys)) =
        case x `compare` y of
            LT -> LT
            GT -> GT
            EQ -> (ContFrac ys) `compare` (ContFrac xs)
    (ContFrac []) `compare` cf =
        case cf of
            ContFrac (x:_) -> 0 `compare` x
            ContFrac []    -> EQ
    cf `compare` (ContFrac []) =
        case cf of
            ContFrac (x:_) -> x `compare` 0
            ContFrac []    -> EQ

x = expCF (1/2)
  where
    expCF x | x < 0     = recip . expCF $ negate x
            | x == 0    = 1
            | x == 1    = let ContFrac es = ecf
                          in ContFrac (take 100 es)
            | otherwise = case x of
                ContFrac [y]    -> (expCF 1)^y
                ContFrac (y:ys) -> if y /= 0
                                     then ((expCF 1)^y)
                                          * (expCF (ContFrac (0:ys)))
                                     else (1+x+x^2/2+x^3/6+x^4/24+x^5/120)
                ContFrac []     -> expCF 0

where the instance was added to ContFrac.hs seems to fail to terminate,
where manually reducing things a bit appears to restore termination.
So, what hit me?


-- wli


Re: [Haskell-cafe] strictness and the simple continued fraction

2004-10-09 Thread William Lee Irwin III
On Sat, Oct 09, 2004 at 12:33:53PM -0700, William Lee Irwin III wrote:
 So, I discovered that simple continued fractions are supposed to be
 spiffy lazy lists and thought I'd bang out some continued fraction code.
 But then I discovered ContFrac.hs and couldn't really better it. Of
 course, I went about trying to actually do things relying on their
 laziness, and discovered they weren't as lazy as I hoped they'd be.
 Simple uses of approximations at the ghci command line such as:

Of course, I instantly discover the bug once I publicly embarrass myself.
Please disregard.


-- wli


Re: [Haskell-cafe] strictness and the simple continued fraction

2004-10-09 Thread William Lee Irwin III
On Sat, Oct 09, 2004 at 12:33:53PM -0700, William Lee Irwin III wrote:
 So, I discovered that simple continued fractions are supposed to be
 spiffy lazy lists and thought I'd bang out some continued fraction code.
 But then I discovered ContFrac.hs and couldn't really better it. Of
 course, I went about trying to actually do things relying on their
 laziness, and discovered they weren't as lazy as I hoped they'd be.
 Simple uses of approximations at the ghci command line such as:

On Sat, Oct 09, 2004 at 12:39:44PM -0700, William Lee Irwin III wrote:
 Of course, I instantly discover the bug once I publicly embarrass myself.
 Please disregard.

Gaffe #2, fixing compare didn't actually fix it. Bombs away...


-- wli


Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results

2004-10-08 Thread William Lee Irwin III
On Fri, Oct 08, 2004 at 08:35:40AM -0400, Robert Dockins wrote:
 Actually, I've been wondering about this.  If my understanding is 
 correct, Haskell lists are basically singly-linked lists of cons cells (is 
 that correct?)  A simple (I think) thing to do would be to make the 
 lists doubly-linked and circular.  That would let us do nice things like 
 have O(1) primops for reverse, tail, (++) and other things that access 
 lists at the end.  However, I'm still pretty new to FP in general, so I 
 don't know if there are any theoretical reasons why something like this 
 couldn't work.  Obviously lazy and infinite lists add some wrinkles, but 
 I think it could be worked through.
 BTW can you give some references to these known techniques?

Ugh, lousy cache properties... try rank-ordered B+ trees. There are
probably better choices than that even. It's probably best Simon point
us to references to what's actually useful here.


-- wli


Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results

2004-10-08 Thread William Lee Irwin III
At some point in the past, someone wrote:
 Actually, I've been wondering about this.  If my understanding is 
 correct, Haskell lists are basically singly-linked lists of cons cells (is 
 that correct?)  A simple (I think) thing to do would be to make the 
 lists doubly-linked and circular.  That would let us do nice things like 
 have O(1) primops for reverse, tail, (++) and other things that access 
 lists at the end.  However, I'm still pretty new to FP in general, so I 
 don't know if there are any theoretical reasons why something like this 
 couldn't work.

On Fri, Oct 08, 2004 at 02:42:28PM +0100, Keith Wansbrough wrote:
 x = [3,5,7]
 primes = 2:x
 odds = 1:x
 You can't do sharing like this if your lists are doubly-linked; lots
 of cool algorithms depend on this sharing.

That constraint makes various other things painful. I suppose there is
no one-size-fits-all solution.


-- wli


Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results

2004-10-08 Thread William Lee Irwin III
William Lee Irwin III [EMAIL PROTECTED] writes:
 Ugh, lousy cache properties... try rank-ordered B+ trees. There are
 probably better choices than that even. It's probably best Simon point
 us to references to what's actually useful here.

On Fri, Oct 08, 2004 at 03:55:05PM +0200, Ketil Malde wrote:
 I'm as dumb as anybody, but it seems to me that one could read a lazy
 chain (list) of buffers as UArray Int Word8, and tack a list-type
 interface (head, tail, cons...) on top.

I had in mind a more general attack on things like e.g. list
concatenation, reversal, indexing (!!), and so on in addition to cache
locality, packing, and reducing space overhead of list linkage. As
Keith Wansbrough mentioned in another post, these kinds of arrangements
tend to spoil other useful properties of the more naive-looking lists.


-- wli


Re: [Haskell-cafe] Re: OCaml list sees abysmal Language Shootout results

2004-10-08 Thread William Lee Irwin III
At some point in the past, someone's attribution was stripped from:
 It can't be transparent. A different type for semi-packed strings,

On Fri, Oct 08, 2004 at 07:28:53PM +0100, MR K P SCHUPKE wrote:
 Again I disagree... I don't see why you cannot change the implementation
 of lists without changing the interface... Good old lists will
 behave like good old lists - just the implementation would try
 and take advantage of blocking of the data wherever possible.

Keith's point regarding sharing is a crucial counterexample to this
proposed transparency.


On Fri, Oct 08, 2004 at 07:28:53PM +0100, MR K P SCHUPKE wrote:
 Perhaps a pragma to change the implementation of lists would
 be a sensible way of selecting the implementation. If a clever 
 way of encoding the node-size (for large cells) is used, then 
 it would be no slower than normal list code... One way of doing this
 would be to only change the format of cells where the next link is
 null (IE the end of the list)... In this case normal cells would
 be encoded _exactly_ as they are at the moment (so no slowdown
 or increase in memory usage) - to encode a large cell, the next
 pointer is null, and then an extra data structure determines
 if this really is the end of the list, or if in fact it is a large
 cell (so we need an item count, and a _real_ next cell link,
 plus the data block)

I'd be fine with an alternative data type supporting e.g. packing and
perhaps O(lg(n)) (!!) and (++) (if I get my wishes wrt. choices of
data structures to implement it). The packing semantics are the crucial
bit that probably makes this awkward to do in idiomatic Haskell, but
there are probably low-level GHC extensions or possibly even
standardized low-level operations I'm not aware of that make such
feasible to implement as just a Haskell library. Someone more cognizant
of those things should probably chime in here for my benefit, as my
interest in this is such that I'd be willing to, say, write my own.
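
As one illustration of the kind of structure being wished for here (added
by way of example; finger-tree sequences were not available at the time of
this thread): Data.Sequence gives O(log n) concatenation and indexing
behind a list-like interface.

\begin{code}
import qualified Data.Sequence as Seq
import Data.Sequence (Seq, (><), index)
import Data.Foldable (toList)

-- (><) is O(log(min(m,n))) concatenation and index is O(log(min(i, n-i))),
-- versus O(n) for (++) and (!!) on ordinary lists.
demo :: (Int, [Int])
demo = (s `index` 5, toList s)
  where
    s :: Seq Int
    s = Seq.fromList [0 .. 4] >< Seq.fromList [5 .. 9]
\end{code}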


-- wli


Re: [Haskell-cafe] Space leaks

2004-10-05 Thread William Lee Irwin III
On Wed, Oct 06, 2004 at 12:27:35PM +1000, Ben Lippmeier wrote:
 Now imagine that state is some large structure. The [x] list is a couple 
 of hundred elements long, and you print out some - but not all - of 
 the [y] elements. While the [y] list remains live, a whole collection of 
  half evaluated intermediate states also remain live -  enter the 
 massive space leak.
 Projects I have written which suffered massive space leaks include:
 - A parallel lazy evaluator,
 http://cs.anu.edu.au/ample.
 The state:the machine state, threads, stacks, the heaps.
 The [x] list: machine instructions.
 The [y] list: profiling info.
 If you don't print out all possible profiling info then all those 
 intermediate states remain live and you've got a massive space leak.
 - An asteroids game I wrote for X.
 The state:position, velocities of ship, asteroids, missiles.
 The [x] list: ship control data, key codes.
 The [y] list: a list of graphics prims which get rendered to the screen.
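
A minimal illustration of the retention pattern Ben describes (my own
sketch, not from his post): in a lazy left-to-right state pass, every
intermediate state stays live as a chain of thunks unless it is forced at
each step.

\begin{code}
import Data.List (mapAccumL)

-- 'leaky' builds up a chain of unevaluated (+) thunks for the state; if
-- only part of the output list is ever demanded, the whole chain is
-- retained.  'forced' evaluates each new state before moving on, so old
-- states can be collected.
leaky, forced :: [Int] -> (Int, [Int])
leaky  = mapAccumL (\s x -> (s + x, s)) 0
forced = mapAccumL (\s x -> let s' = s + x in s' `seq` (s', s)) 0
\end{code}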

Know any tricks for 2D recurrences where only points along the i=j line
(which depend on all points where i <= j) are used for the final result?

-- wli


Re: Lazy streams and unsafeInterleaveIO

2002-12-24 Thread William Lee Irwin III
On Mon, Dec 23, 2002 at 09:05:00AM +, Glynn Clements wrote:
 The main problems with lazy I/O are the lack of control over ordering
 (e.g. you can't delete the file until a stream has been closed, but
 you may not be able to control how long the stream remains open), and
 the inability to handle exceptions (the actual exception won't occur
 until after e.g. getContents has returned).

For my purposes it suffices to have a notion of when I'm done
generating output from the input stream and I close the handle then.
I basically just pass around the input file handles in perpetuity
and either don't close them until the last minute, as it's important to
clean up so I can reuse the action later, or until I'm done generating
output from a complete scan of the thing. This forces the evaluation of
the data structures created from the input, which may linger around
longer than the input stream itself.

Basically, I don't have these issues.

Actually, you -should- be able to delete the file before the stream has
been closed; this is a common idiom for dealing with temporary files.
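
A small sketch of that idiom (my own, Unix-specific): unlink the file while
the handle is still open; the contents stay readable until the handle is
closed.

\begin{code}
import System.IO
import System.Directory (removeFile)

main :: IO ()
main = do
  writeFile "scratch.tmp" "temporary data\n"
  h <- openFile "scratch.tmp" ReadMode
  removeFile "scratch.tmp"        -- the name is gone, but the open handle
  contents <- hGetContents h      -- still reads the file lazily
  putStr contents
  hClose h
\end{code}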


Bill



monadic stack to register machine translator

2002-11-13 Thread William Lee Irwin III
module GT where
import Monad
import Monoid
import MonadState
import MonadWriter
import MonadRWS

-- Just a quick exercise in using monads.
-- Thought it'd be nice to share with the class.

data GOp
= PushVal Integer
| Push Integer
| Pop Integer
| Slide Integer
| Update Integer
| GAdd | GSub | GMul | GDiv | GMod | GPow
| GNeg | GAbs
deriving (Eq, Ord, Read, Show)

type Tmp = Integer

data ROp
= LoadImm Tmp Integer
| RAdd Tmp Tmp Tmp
| RSub Tmp Tmp Tmp
| RMul Tmp Tmp Tmp
| RDiv Tmp Tmp Tmp
| RMod Tmp Tmp Tmp
| RPow Tmp Tmp Tmp
| RNeg Tmp Tmp
| RAbs Tmp Tmp
deriving (Eq, Ord, Read, Show)

type CounterT m t = StateT Integer m t
type StackT t = State [Integer] t

type GST t = RWS () [ROp] (Integer, [Integer]) t

class Stack f where
    pushVal, push, pop, update, slide :: Integral t => t -> f ()
    popVal :: Integral t => f t

instance Integral t => Stack (RWS () [ROp] (t, [t])) where
    pushVal n = do
        (ctr, stk) <- get
        put (ctr, fromIntegral n : stk)
    popVal = do
        (ctr, top:stk) <- get
        put (ctr, stk)
        return (fromIntegral top)
    push n = do
        (ctr, stk) <- get
        put (ctr, stk!!fromIntegral n : stk)
    pop n = do
        (ctr, stk) <- get
        put (ctr, drop (fromIntegral n) stk)
    slide n = do
        (ctr, top:stk) <- get
        put (ctr, top : drop (fromIntegral n) stk)
    update n = do
        (ctr, top:stk) <- get
        let (front, _:back) = splitAt (fromIntegral n) stk
        put (ctr, front ++ [top] ++ back)

class Counter f where
    gen :: Enum t => f t

instance Integral t => Counter (RWS () [ROp] (t, [t])) where
    gen = do
        (ctr, stk) <- get
        put (ctr + 1, stk)
        return . toEnum . fromIntegral $ ctr + 1

instance (Enum t, Monad m) => Counter (StateT t m) where
    gen = do
        ctr <- get
        put $ succ ctr
        ctr <- get
        return . toEnum $ fromEnum ctr

translate gOps = snd $ evalRWS (mapM trans gOps) () (0,[])

trans :: GOp -> GST ()
trans i = case i of
    PushVal n -> do
        reg <- gen
        tell [LoadImm reg n]
        pushVal reg
    Push n   -> push n
    Pop n    -> pop n
    Slide n  -> slide n
    Update n -> update n
    GAdd     -> doBinOp RAdd
    GSub     -> doBinOp RSub
    GMul     -> doBinOp RMul
    GDiv     -> doBinOp RDiv
    GMod     -> doBinOp RMod
    GPow     -> doBinOp RPow
    GNeg     -> doUnOp RNeg
    GAbs     -> doUnOp RAbs
  where
    doUnOp op = do
        x <- popVal
        y <- gen
        tell [op y x]
        pushVal y
    doBinOp op = do
        x <- popVal
        y <- popVal
        z <- gen
        tell [op z x y]
        pushVal z
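
A quick check of the translator (a hypothetical session added here, not
part of the original post; it assumes the module compiles with the
multi-parameter class extensions of the day):

    *GT> translate [PushVal 2, PushVal 3, GAdd]
    [LoadImm 1 2,LoadImm 2 3,RAdd 3 2 1]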



Re: 1 line simple cat in Haskell

2002-11-13 Thread William Lee Irwin III
On Wed, 13 Nov 2002, William Lee Irwin III wrote:
 main = mapM_ (\h -> mapM_ putChar =<< hGetContents h) =<< mapM (flip openFile $ 
ReadMode) =<< getArgs

On Wed, Nov 13, 2002 at 07:46:41AM -0800, Hal Daume III wrote:
   main = interact id

There is a semantic difference here, as the version I posted above takes
files from the command-line, though it does fail to accommodate the
pass-through case, which is handled by:

main = getArgs >>= \args -> if args == [] then interact id else mapM readFile args >>= 
mapM_ putStr

.. which seems to be a bit above 80 chars. Some library function trickery
is probably in order to cut the if statement down to size. e.g.

nonEmptyMapM_ :: Monad m => m () -> (t -> m ()) -> [t] -> m ()
nonEmptyMapM_ def _ []       = def
nonEmptyMapM_ _   f xs@(_:_) = mapM_ f xs
main = getArgs >>= nonEmptyMapM_ (interact id) ((>>= putStr) . readFile)

Bill



Re: can a lazy language give fast code?

2002-07-31 Thread William Lee Irwin III

On Wed, 31 Jul 2002, Andrew J Bromage wrote:
 Perhaps the ICFP contests are actually fairer as benchmarks than as
 competitions?

On Wed, Jul 31, 2002 at 09:59:31AM +0100, D. Tweed wrote:
 Interesting thought, particularly if the judges announced changes to what
 the problem to be solved was half-way through :-)

Don't forget to change the operating system, programming language,
hardware platform, work site, and lay off 80% of the team. =)


Cheers,
Bill



Re: Precision of `Double's in Hugs

2002-01-12 Thread William Lee Irwin III

On Sun, Jan 13, 2002 at 01:43:32AM +, Liyang Hu wrote:
 'fraid I've been too spoilt by Debian, haven't built anything from
 source in ages ... I'll notify Hugs' package maintainer and see if I
 can convince him/her to apply this ...

No trouble at all; I feel this should be the default as well, and I'll
set it when I find the other packaging bugfix that was sent to me recently.

By the way, where is the person who sent me the other bugfix? Can you resend?


Thanks,
Bill




Re: Pointers in Haskell??

2001-12-10 Thread William Lee Irwin III

On Mon, Dec 10, 2001 at 12:58:33AM -0800, Simon Peyton-Jones wrote:
 |  Simon Peyton-Jones. The implementation of functional
 |  programming languages. Prentice-Hall, 1987
 | 
 | is this book could be made available online ? cos on amazon 
 | it seems out of print.
 
 I'm planning to scan it in and make the copy available online.
 In the next month or two.

At long last! Thank you so very much!


Thanks,
Bill




Re: Primitive types and Prelude shenanigans

2001-02-16 Thread William Lee Irwin III

William Lee Irwin III [EMAIL PROTECTED] writes:
  literal "0" gets mapped to zero :: AdditiveMonoid t = t
  literal "1" gets mapped to one :: MultiplicativeMonoid t = t
  literal "5" gets mapped to (fromPositiveInteger 5)
  literal "-9" gets mapped to (fromNonZeroInteger -9)

On Fri, Feb 16, 2001 at 08:09:58AM +, Marcin 'Qrczak' Kowalczyk wrote:
 Actually -9 gets mapped to negate (fromInteger 9). At least in theory,
 because in ghc it's fromInteger (-9) AFAIK.

Sorry I was unclear about this, I had in mind that in the scheme I was
going to implement that the sign of the literal value would be discerned
and negative literals carried to fromNonZeroInteger (-9) etc.

William Lee Irwin III [EMAIL PROTECTED] writes:
 The motivation behind this is so that some fairly typical
 mathematical objects (multiplicative monoid of nonzero integers,
 etc.) can be directly represented by numerical literals (and
 primitive types).

On Fri, Feb 16, 2001 at 08:09:58AM +, Marcin 'Qrczak' Kowalczyk wrote:
 I am definitely against it, especially the zero and one case.
 When one can write 1, he should be able to write 2 too obtaining the
 same type. It's not hard to write zero and one.

The real hope here is to get the distinct zero and one for things that
are already traditionally written that way, like the multiplicative
monoid of nonzero integers or the additive monoid of natural numbers.
Another implication I view as beneficial is that the 0 (and 1) symbols
can be used in vector (and perhaps matrix) contexts without the
possibility that other integer literals might be used inadvertently.

On Fri, Feb 16, 2001 at 08:09:58AM +, Marcin 'Qrczak' Kowalczyk wrote:
 What next: 0 for nullPtr and []?

It's probably good to point out that this scheme is "permissive" enough,
or more specifically, allows enough fine-grained expressiveness to allow
the symbol to be overloaded for address types on which arithmetic is
permitted, and lists under their natural monoid structure, which I agree
is aesthetically displeasing at the very least, and probably undesirable
to allow by default.

On Fri, Feb 16, 2001 at 08:09:58AM +, Marcin 'Qrczak' Kowalczyk wrote:
 Moreover, the situation where each integer literal means applied
 fromInteger is simple to understand, remember and use. I don't want to
 define a bunch of operations for the same thing. Please keep Prelude's
 rules simple.

I don't think this sort of scheme is appropriate for a standard Prelude
either, though I do think it's interesting to me, and perhaps others. I
don't mean to give the impression that I'm proposing this for inclusion
in any sort of standard Prelude. It's a more radical point in the design
space that I am personally interested in exploring both to discover its
implications for programming (what's really awkward, what things become
convenient, etc.), and to acquaint myself with the aspects of the
compiler pertinent to the handling of primitive types.


Cheers,
Bill




Re: Primitive types and Prelude shenanigans

2001-02-16 Thread William Lee Irwin III

William Lee Irwin III [EMAIL PROTECTED] writes:
  literal "5" gets mapped to (fromPositiveInteger 5)
  literal "-9" gets mapped to (fromNonZeroInteger -9)

On Fri, Feb 16, 2001 at 05:42:17PM +, Marcin 'Qrczak' Kowalczyk wrote:
 Note that when a discussed generic Prelude replacement
 framework is done, and ghc's rules are changed to expand -9 to
 negate (fromInteger 9) instead of fromInteger (-9), then you don't
 need uglification of the fromInteger function to be able to define
 types with only nonnegative numeric values. Just define your negate
 in an appropriate class, different from the fromInteger's class.

Good point, the canonical injection from the positive integers into
the various supersets (with structure) thereof handles it nicely.

I foresee:
fromPositiveInteger :: ContainsPositiveIntegers t => PositiveInteger -> t
instance ContainsPositiveIntegers Integer where ...
instance AdditiveGroup Integer where ...
negate :: AdditiveGroup t => t -> t {- this seems natural, but see below -}

fromPositiveInteger 5 :: ContainsPositiveIntegers t => t

negate $ fromPositiveInteger 5
:: (AdditiveGroup t, ContainsPositiveIntegers t) => t

which is not exactly what I want (and could probably use some aesthetic
tweaking); I had in mind that negative integers would somehow imply a
ContainsNonZeroIntegers or ContainsAllIntegers instance or the like.
The solution actually imposes a rather natural instance (though one
which could cause overlaps):

instance (AdditiveGroup t, ContainsPositiveIntegers t)
    => ContainsAllIntegers t where ...

I suppose one big wrinkle comes in when I try to discuss negation in
the multiplicative monoid of nonzero integers. That question already
exists without the Prelude's altered handling of negative literals.
negate . fromInteger $ n just brings it immediately to the surface.

0 and 1 will still take some work, but I don't expect help with them.

Thanks for the simplification!

Cheers,
Bill

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Primitive types and Prelude shenanigans

2001-02-15 Thread William Lee Irwin III

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
 The most immediate and painful stumbling block in Haskell 98 is that
 numeric literals, like 3, turn into (Prelude.fromInt 3), where
 "Prelude.fromInt" really means "the fromInt from the standard Prelude"
 regardless of whether the standard Prelude is imported into scope.

 Some while ago I modified GHC to have an extra runtime flag to let you
 change this behaviour.  The effect was that 3 turns into simply
 (fromInt 3), and the "fromInt" means "whatever fromInt is in scope".
 The same thing happens for
   - numeric patterns
   - n+k patterns (the subtraction is whatever is in scope)
   - negation (you get whatever "negate" is in scope, not Prelude.negate)

For the idea for numeric literals I had in mind (which is so radical I
don't intend to seek much, if any help in implementing it other than
general information), even this is insufficient. Some analysis of the
value of the literal would need to be incorporated so that something
like the following happens:

literal "0" gets mapped to zero :: AdditiveMonoid t = t
literal "1" gets mapped to one :: MultiplicativeMonoid t = t
literal "5" gets mapped to (fromPositiveInteger 5)
literal "-9" gets mapped to (fromNonZeroInteger -9)
literal "5.0" gets mapped to (fromPositiveReal 5.0)
literal "-2.0" gets mapped to (fromNonZeroReal -2.0)
literal "0.0" gets mapped to (fromReal 0.0)

etc. A single fromInteger or fromIntegral won't suffice here. The
motivation behind this is so that some fairly typical mathematical
objects (multiplicative monoid of nonzero integers, etc.) can be
directly represented by numerical literals (and primitive types).
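
A minimal sketch of the class structure those mappings presuppose (my own
illustration; the class and function names come from the text above, the
rest is assumption):

\begin{code}
class AdditiveMonoid t           where zero                :: t
class MultiplicativeMonoid t     where one                 :: t
class ContainsPositiveIntegers t where fromPositiveInteger :: Integer -> t
class ContainsNonZeroIntegers t  where fromNonZeroInteger  :: Integer -> t
\end{code}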

I don't for a minute think this is suitable for general use, but
I regard it as an interesting (to me) experiment.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
 (Of course, this is not Haskell 98 behaviour.)   I think I managed to
 forget to tell anyone of this flag.  And to my surprise I can't find
 it any more! But several changes I made to make it easy are still
 there, so I'll reinstate it shortly.  That should make it easy to
 define a new numeric class structure.

It certainly can't hurt; even if the code doesn't help directly with
my dastardly plans, examining how the handling of overloaded literals
differs will help me understand what's going on.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
 So much for numerics.  It's much less obvious what to do about booleans.
 Of course, you can always define your own Bool type.  But we're going to
 have to change the type that if-then-else uses, and presumably guards too.
 Take if-then-else.  Currently it desugars to 
   case e of
     True -> then-expr
     False -> else-expr
 but your new boolean might not have two constructors.  So maybe we should 
 simply assume a function  
   if :: Bool -> a -> a -> a
 and use that for both if-then-else and guards  I wonder what else?

I had in mind that there might be a class of suitable logical values
corresponding to the set of all types suitable for use as such. As
far as I know, the only real restriction on subobject classifiers
for logical values is that it be a pointed set where the point
represents truth. Even if it's not the most general condition, it's
unlikely much can be done computationally without that much. So
since we must be able to compare logical values to see if they're
that distinguished truth value:

\begin{pseudocode}
class Eq lv => LogicalValue lv where
    definitelyTrue :: lv
\end{pseudocode}

From here, ifThenElse might be something like:

\begin{morepseudocode}
ifThenElse :: LogicalValue lv => lv -> a -> a -> a
ifThenElse isTrue thenValue elseValue =
    case isTrue == definitelyTrue of
        BooleanTrue -> thenValue
        _           -> elseValue
\end{morepseudocode}

or something on that order. The if/then/else syntax is really just
a combinator like this with a mixfix syntax, and case is the primitive,
so quite a bit of flexibility is possible given either some "hook" the
mixfix operator will use or perhaps even means for defining arbitrary
mixfix operators. (Of course, a hook is far easier.)

The gains from something like this are questionable, but it's not
about gaining anything for certain, is it? Handling weird logics
could be fun.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
[interesting example using otherwise in a pattern guard elided]
 And we'll get warnings from the pattern-match compiler.  So perhaps we
 should guarantee that (if otherwise e1 e2) = e1.  

I'm with you on this, things would probably be too weird otherwise.

On Wed, Feb 14, 2001 at 02:19:39PM -0800, Simon Peyton-Jones wrote:
 You may say that's obvious, but the point is that we have to specify
 what can be assumed about an alien Prelude.

There is probably a certain amount of generality 

Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-12 Thread William Lee Irwin III

In a later posting Marcin Kowalczyk says:
 If (+) can be implicitly lifted to functions, then why not signum?
 Note that I would lift neither signum nor (+). I don't feel the need.
  ...

On Mon, Feb 12, 2001 at 09:33:03AM +, Jerzy Karczmarczuk wrote:
 I not only feel the need, but I feel that it is important that the
 additive structure in the codomain is inherited by functions. In a more
 specific context: the fact that linear functionals over a vector space
 form also a vector space, is simply *fundamental* for the quantum 
 mechanics, for crystallography, etc. You don't need to be a Royal
 Abstractor to see this. 

I see this in a somewhat different light, though I'm in general agreement.

What I'd like to do is to be able to effectively model module structures
in the type system, and furthermore be able to simultaneously impose
distinct module structures on a particular type. For instance, complex
n-vectors are simultaneously C-modules and R-modules. and an arbitrary
commutative ring R is at once a Z-module and an R-module. Linear
functionals, which seem like common beasts (try a partially applied
inner product) live in the mathematical structure Hom_R(M,R) which is once
again an R-module, and, perhaps, by inheriting structure on R, an R'
module from various R'. So how does this affect Prelude design? Examining
a small bit of code could be helpful:

-- The group must be Abelian. I suppose anyone could think of this.
class (AdditiveGroup g, Ring r) => LeftModule g r where
    (<>) :: r -> g -> g

instance AdditiveGroup g => LeftModule g Integer where
    n <> x | n == 0 = zero
           | n < 0  = -((-n) <> x)
           | n > 0  = x + (n-1) <> x

... and we naturally acquire the sort of structure we're looking for.
But this only shows a possible outcome, and doesn't motivate the
implementation. What _will_ motivate the implementation is the sort
of impact this has on various sorts of code:

(1) The fact that R is an AdditiveGroup immediately makes it a
Z-module, so we have mixed-mode arithmetic by a different
means from the usual implicit coercion.

(2) This sort of business handles vectors quite handily.

(3) The following tidbit of code immediately handles curried innerprods:

instance (AdditiveGroup group, Ring ring) => LeftModule (group -> ring) ring
    where
    r <> g = \g' -> r <> g g'

(4) Why would we want to curry innerprods? I envision:

type SurfaceAPoles foo = SomeGraph (SomeVector foo)

and then

surface :: SurfaceAPoles bar
innerprod v `fmap` normalsOf faces where faces = facesOf surface

(5) Why would we want to do arithmetic on these beasts now that
we think we might need them at all?

If we're doing things like determining the light reflected off of the
various surfaces we will want to scale and add together the various
beasties. Deferring the innerprod operation so we can do this is inelegant
and perhaps inflexible compared to:

lightSources :: [(SomeVector foo -> Intensity foo, Position)]
lightSources = getLightSources boundingSomething
reflection = sum $ map (\(f,p) -> getSourceWeight p * f) lightSources
reflection `fmap` normalsOf faces where faces = facesOf surface

and now in the lightSources perhaps ambient light can be represented
very conveniently, or at least the function type serves to abstract out
the manner in which the orientation of a surface determines the amount
of light reflected off it.

(My apologies for whatever inaccuracies are happening with the optics
here, it's quite far removed from my direct experience.)

Furthermore, within things like small interpreters, it is perhaps
convenient to represent the semantic values of various expressions by
function types. If one should care to define arithmetic on vectors and
vector functions in the interpreted language, support in the source
language allows a more direct approach. This would arise within solid
modelling and graphics once again, as little languages are often used
to describe objects, images, and the like.

How can we anticipate all the possible usages of pretty-looking vector
and matrix algebra? I suspect graphics isn't the only place where
linear algebra could arise. All sorts of differential equation models
of physical phenomena, Markov models of state transition systems, even
economic models at some point require linear algebra in their
computational methods.  It's something I at least regard as a fairly
fundamental and important aspect of computation. And to me, that means
that the full power of the language should be applied toward beautifying,
simplifying, and otherwise enabling linear algebraic computations.


Cheers,
Bill
P.S.:   Please forgive the harangue-like nature of the post, it's the best
I could do at 3AM.




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread William Lee Irwin III

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel [EMAIL PROTECTED] writes:
 Can you demonstrate a revised hierarchy without Eq? What would
 happen to Ord and the numeric classes with default class method
 definitions that use (==) either explicitly or in pattern matching
 against numeric literals?

I anticipate that some restructuring of the numeric classes must be
done in order to accomplish this. I am, of course, attempting to
contrive such a beast for my own personal use.

On Sun, Feb 11, 2001 at 07:59:38AM +, Marcin 'Qrczak' Kowalczyk wrote:
 OK, then you can't write these default method definitions.
 I'm against removing Eq from the numeric hierarchy, against making Num
 instances for functions, but I would probably remove Show. I haven't
 seen a sensible proposal of a replacement of the whole hierarchy.

Well, there are a couple of problems with someone like myself trying
to make such a proposal. First, I'm a bit too marginalized and/or
committed to a radical alternative. Second, I don't have the right
associations or perhaps other resources.

Removing Eq sounds like a good idea to me, in all honesty, though I
think numeric instances for functions (at least by default) aren't
great ideas. More details follow:

Regarding Eq, there are other types besides functions which might
not be good ideas to define equality on, either because they're not
efficiently implementable or are still inappropriate. Matrix types
aren't good candidates for defining equality, for one. Another one
you might not want to define equality on are formal power series
represented by infinite lists, since equality tests will never
terminate. A third counterexample comes, of course, from graphics,
where one might want to conveniently scale and translate solids.
Testing meshes and surface representations for equality is once
again not a great idea. Perhaps these counterexamples are a little
contrived, but perhaps other people can come up with better ones.

As far as the function instances of numeric types, there are some
nasty properties that they have that probably make it a bad idea.
In particular, I discovered that numeric literals' fromInteger
property creates the possibility that something which is supposed
to be a scalar or some other numeric result might accidentally be
applied. For instance, given an expression with an intermediate
numeric result like:

f u v . g x y $ h z

which is expected to produce a number, one could accidentally apply
a numeric literal or something bound to one to some arguments, creating
a bug. So this is for at least partial agreement, though I think it
should be available in controlled circumstances. Local module
importations and/or scoped instances might help here, or perhaps
separating out code that relies upon them into a module where the
instance is in scope, as it probably needs control which is that tight.

Sun, 11 Feb 2001 13:37:28 +1300, Brian Boutel [EMAIL PROTECTED] writes:
 In an instance declaration, if a method requires operations of
 another class which is not a superclass of the class being instanced,
 it is sufficient to place the requirement in the context,

On Sun, Feb 11, 2001 at 07:59:38AM +, Marcin 'Qrczak' Kowalczyk wrote:
 Better: it is sufficient if the right instance is defined somewhere.

Again, I'd be careful with this idea. It's poor design to unnecessarily
restrict the generality of code. Of course, it's poor design to not try
to enforce necessary conditions in the type system, too, which is why
library design is nontrivial. And, of course, keeping it simple enough
for use by the general populace (or whatever semblance thereof exists
within the Haskell community) might well conflict with the desires of
persons like myself who could easily fall prey to the accusation that
they're trying to turn Haskell into a computer algebra system, and adds
yet another constraint to the library design making it even tougher.


Cheers,
Bill




Re: Show, Eq not necessary for Num [Was: Revamping the numeric classes]

2001-02-11 Thread William Lee Irwin III

On 11-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
 There may be some misunderstanding here. If you are talking about type
 for which equality is always undefined, then I agree with you, but that
 is not what I was talking about. I was thinking about types where
 equality is defined for some pairs of argument values and undefined for
 others - I think the original example was some kind of arbitrary
 precision reals.

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
 The original example was treating functions as a numeric type.  In the
 case of functions, computing equality is almost always infeasible.
 But you can easily define addition etc. pointwise:
   
   f + g = (\ x -> f x + g x)

I have a fairly complete implementation of this with dummy instances of
Eq and Show for those who want to see the consequences of this. I found,
interestingly enough, that any type constructor f with the following
three properties could have an instance of Num defined upon f a:

(1) it has a unary constructor to lift scalars 
(2) it has a Functor instance
(3) it has an analogue of zip which can be defined upon it

or, more precisely:

\begin{code}
instance (Eq (f a), Show (f a), Num a, Functor f,
          Zippable f, HasUnaryCon f) => Num (f a)
    where
    f + g = fmap (uncurry (+)) $ fzip f g
    f * g = fmap (uncurry (*)) $ fzip f g
    f - g = fmap (uncurry (-)) $ fzip f g
    negate = fmap negate
    abs = fmap abs
    signum = fmap signum
    fromInteger = unaryCon . fromInteger

class Zippable f where
    fzip :: f a -> f b -> f (a,b)

class HasUnaryCon f where
    unaryCon :: a -> f a

instance Functor ((->) a) where
    fmap = (.)

instance Zippable ((->) a) where
    fzip f g = \x -> (f x, g x)

instance HasUnaryCon ((->) a) where
    unaryCon = const
\end{code}

and this generalizes nicely to other data types:

\begin{code}
instance Zippable Maybe where
    fzip (Just x) (Just y) = Just (x,y)
    fzip _ Nothing = Nothing
    fzip Nothing _ = Nothing

instance HasUnaryCon Maybe where
    unaryCon = Just

instance Zippable [ ] where
    fzip = zip

instance HasUnaryCon [ ] where
    unaryCon = cycle . (:[])
\end{code}
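
For illustration, with dummy Eq/Show instances for functions in scope as
mentioned above, a session might behave like this (hypothetical, not part
of the original message):

    ghci> (sin + cos) 0 :: Double
    1.0
    ghci> [1,2,3] + [10,20,30] :: [Integer]
    [11,22,33]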

On 11-Feb-2001, Brian Boutel [EMAIL PROTECTED] wrote:
 Returning to the basic issue, I understood the desire to remove Eq as a
 superclass of Num was so that people were not required to implement
 equality if they did not need it, not that there were significant
 numbers of useful numeric types for which equality was not meaningful. 

On Sun, Feb 11, 2001 at 06:24:33PM +1100, Fergus Henderson wrote:
 The argument is the latter, with functions as the canonical example.

Well, usually equality as a mathematical concept is meaningful, but
either not effectively or efficiently computable. Given an enumerable
and bounded domain, equality may be defined (perhaps inefficiently)
on functions by

\begin{code}
instance (Enum a, Bounded a, Eq b) => Eq (a -> b) where
    f == g = all (uncurry (==))
             $ map (\x -> (f x, g x)) [minBound..maxBound]
\end{code}

and as I've said in another post, equality instances on data structures
that are expected to be infinite or very large, whose semantics make
equality difficult to compute, or for which equality is simply not useful,
are also not good to force.


Cheers,
Bill




Re: A sample revised prelude for numeric classes

2001-02-11 Thread William Lee Irwin III

On Sun, Feb 11, 2001 at 05:42:15PM -0500, Dylan Thurston wrote:
 I've started writing up a more concrete proposal for what I'd like the
 Prelude to look like in terms of numeric classes.  Please find it
 attached below.  It's still a draft and rather incomplete, but please
 let me know any comments, questions, or suggestions.

This is great, it gets something concrete out there to comment on, which
is probably quite a bit of what needs to happen.

For brevity's sake, I'll have to chop up your message a bit.

 (1) The current Prelude defines no semantics for the fundamental
 operations.  For instance, presumably addition should be
 associative (or come as close as feasible), but this is not
 mentioned anywhere.

This is something serious, as I sort of took for granted the various
properties of operations etc. I'm glad you pointed it out.

 (2) There are some superfluous superclasses.  For instance, Eq and
 Show are superclasses of Num.  Consider the data type
 
  data IntegerFunction a = IF (a -> Integer)
 
 One can reasonably define all the methods of Num for
 IntegerFunction a (satisfying good semantics), but it is
 impossible to define non-bottom instances of Eq and Show.
 
 In general, superclass relationship should indicate some semantic
 connection between the two classes.

It's possible to define non-bottom instances, for instance:

instance Eq (a -> b) where
    _ == _ = False

instance Show (a -> b) where
    show = const "function"

I suspect you're aware of this and had in mind the constraint that
they should also respect the invariants and laws of the classes.

  class (Additive a) => Num a where
      (*) :: a -> a -> a
      one   :: a
      fromInteger :: Integer -> a

 Num encapsulates the mathematical structure of a (not necessarily
 commutative) ring, with the laws
 
   a * (b * c) === (a * b) * c
   one * a === a
   a * one === a
   a * (b + c) === a * b + a * c
 
 Typical examples include integers, matrices, and quaternions.

There is an additional property of zero being neglected here, namely
that it is an annihilator. That is,

zero * x === zero
x * zero === zero

Again, it's probably a reasonable compromise not to accommodate
nonassociative algebras, though an important application of them
lies within graphics, namely 3-vectors with the cross product.

 "reduceRepeat op a n" is an auxiliary function that, for an
 associative operation "op", computes the same value as
 
   reduceRepeat op a n = foldr1 op (genericReplicate n a)
 
 but applies "op" O(log n) times.  A sample implementation is below.

This is a terrific idea, and I'm glad someone has at last proposed
using it.
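
For readers following along, here is a minimal sketch of such a
function using the usual repeated-squaring trick; it is an illustration
only, not the sample implementation attached to the proposal.

\begin{code}
reduceRepeated :: (a -> a -> a) -> a -> Integer -> a
reduceRepeated op a n
    | n <= 0    = error "reduceRepeated: expects a positive count"
    | n == 1    = a
    | even n    = reduceRepeated op (a `op` a) (n `div` 2)
    | otherwise = a `op` reduceRepeated op (a `op` a) (n `div` 2)
\end{code}

For an associative op this agrees with the foldr1 version above while
applying op only O(log n) times.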

  class (Num a) => Integral a where
      div, mod :: a -> a -> a
      divMod :: a -> a -> (a,a)
      gcd, lcm :: a -> a -> a
      extendedGCD :: a -> a -> (a,a,a)

While I'm wholeheartedly in favor of the Euclidean algorithm idea, I
suspect that more structure (i.e. separating it out to another class)
could be useful; for instance, formal power series over Z are integral
domains, but are not a Euclidean domain because their residue classes
aren't computable by a finite process. Various esoteric rings like
Z[sqrt(k)] for various positive and negative integer k can also make
this dependence explode, though they're probably too rare to matter.
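
For reference, the Euclidean case itself is short. A minimal sketch for
Integer, assuming the (a,a,a) result is read as (g, s, t) with
s*a + t*b == g (the proposal leaves the convention open):

\begin{code}
extendedGCDInteger :: Integer -> Integer -> (Integer, Integer, Integer)
extendedGCDInteger a 0 = (abs a, signum a, 0)
extendedGCDInteger a b = (g, t, s - q * t)
  where
    (q, r)    = a `divMod` b
    (g, s, t) = extendedGCDInteger b r
\end{code}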

 TODO: quot, rem partially defined.  Explain.
 The default definition of extendedGCD above should not be taken as
 canonical (unlike most default definitions); for some Integral
 instances, the algorithm could diverge, might not satisfy the laws
 above, etc.
 TODO: (/) is only partially defined.  How to specify?  Add a member
   isInvertible :: a -> Bool?
 Typical examples include rationals, the real numbers, and rational
 functions (ratios of polynomials).

It's too easy to make it a partial function for this to be worth much
worry, but if you wanted to go over the top (and you don't), you would
want the multiplicative group of units to be the type of the argument
(and hence of the result) of recip.

  class (Num a, Additive b) => Powerful a b where
      ...
 I don't know interesting examples of this structure besides the
 instances defined above and the Floating class below.
 "Positive" is a type constructor that asserts that its argument is >=
 0; "positive" makes this assertion.  I am not sure how this will
 interact with defaulting arguments so that one can write
 
   x ^ 5
 
 without constraining x to be of Fractional type.

What you're really trying to capture here is the (right?) Z-module-like
structure of the multiplicative monoid in a commutative ring. There are
some weird things going on here I'm not sure about, namely:

(1) in an arbitrary commutative ring (or multiplicative semigroup),
the function can (at best) be defined as
(^) :: ring -> NaturalNumbers -> ring
That is, only the natural numbers can act on ring to produce
an exponentiation-like operation.

Re: A sample revised prelude for numeric classes

2001-02-11 Thread William Lee Irwin III

At 2001-02-11 14:42, Dylan Thurston wrote:
 I've started writing up a more concrete proposal for what I'd like the
 Prelude to look like in terms of numeric classes.  Please find it
 attached below.  It's still a draft and rather incomplete, but please
 let me know any comments, questions, or suggestions.

On Sun, Feb 11, 2001 at 04:03:37PM -0800, Ashley Yakeley wrote:
 Apologies if this has been discussed and I missed it. When it comes to 
 writing a 'geek' prelude, what was wrong with the Basic Algebra Proposal 
 found in ftp://ftp.botik.ru/pub/local/Mechveliani/basAlgPropos/ ? 
 Perhaps it could benefit from multi-parameter classes?

I'm not sure if there is anything concrete wrong with it, in fact, I'd
like to see it made into a Prelude, but there are several reasons why
I don't think it's being discussed here in the context of an alternative
for a Prelude.

(1) It's widely considered too complex and/or too mathematically
involved for the general populace (or whatever semblance thereof
exists within the Haskell community).
(2) As a "Geek Prelude", it's considered to have some aesthetic
and/or usability issues.
(3) For persons as insane as myself, it's actually not radical enough.

My commentary on it thus far is that I see it as high-quality software
that could not only already serve as a "Geek Prelude" for many users, but
upon which could also be based implementations and designs of future
"Geek Preludes". The fact that no one has discussed it is probably due
to a desire not to return to previous flamewars, but it should almost
definitely be discussed as a reference point.

I've actually been hoping that Mechveliani would chime in and comment on
the various ideas, since he's actually already been through the motions
of implementing an alternative Prelude and seen what sort of
difficulties arise from actually trying to do these things various ways.


Cheers,
Bill




Re: A sample revised prelude for numeric classes

2001-02-11 Thread William Lee Irwin III

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
 It follows:
   zero * x === (one - one) * x === one * x - one * x === x - x === zero

Heh, you've caught me sleeping. =)

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
 I tried to write the definitions in a way that could be defined for
 any unique factorization domain, not necessarily Euclidean: just take
 the two numbers, write them as a unit times prime factors in canonical
 form, and take the product of the common factors and call that the
 GCD.  On reflection, extendedGCD probably isn't easy to write in
 general.

Well, factorizing things in various UFD's doesn't sound easy to me, but
at this point I'm already having to do some reaching for counterexamples
of practical programs where this matters. It could end up being a useless
class method in some instances, so I'm wary of it.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
 What operations would you propose to encapsulate an integral domain
 (rather than a UFD)?

I'm not necessarily proposing a different set of operations to
encapsulate them, but rather that gcd and cousins be split off into
another subclass. Your design decisions in general appear to be
striking a good chord, so I'll just bring up the idea and let you
decide whether it should be done that way and so on.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
 Formal power series over Z are an interesting example; I'll think
 about it.  On first blush, it seems like if you represented them as
 lazy lists you might be able to compute the remainder term by term.

Consider taking the residue of a truly infinite member of Z[[x]]
mod an ideal generated by a polynomial, e.g. 1/(1-x) mod (1+x^2).
You can take the residue of each term of 1/(1-x), so x^(2n) -> (-1)^n
and x^(2n+1) -> (-1)^n x, but you end up with an infinite number of
(nonzero!) residues to add up and hence encounter the troubles with
processes not being finite that I mentioned.
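
Spelled out, using the usual identification x^2 === -1 mod (1+x^2):

\[
  \frac{1}{1-x} \;=\; \sum_{n \ge 0} x^{n}
  \;\equiv\; \sum_{k \ge 0} (-1)^{k} \;+\; x \sum_{k \ge 0} (-1)^{k}
  \pmod{\,1+x^{2}\,},
\]

and each of the two sums on the right is an infinite sum of plus-or-minus
ones that no finite process can evaluate, which is exactly the trouble.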

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
  (3) Under some condition I don't seem to be able to formulate
  offhand, one can do
  (^) :: ring -> ring -> ring
  Now the ring (or perhaps more generally some related ring)
  acts on ring to produce an expontiation operation like what
  is typically thought of for real numbers. Anyone with good
  ideas as to what the appropriate conditions are here, please
  speak up.
  (Be careful, w ^ z = exp (z * log w) behaves badly for w < 0
  on the reals.)

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
 For complex numbers as well, this operation has problems because of
 branch cuts.  It does satisfy that identity I mentioned, but is not
 continuous in the first argument.
 It is more common to see functions like exp be well defined (for more
 general additive groups) than to see the full (^) be defined.

I think it's nice to have the Cauchy principal value versions of things
floating around.  I know at least that I've had call for using the CPV
of exponentiation (and it's not hard to contrive an implementation),
but I'm almost definitely an atypical user. (Note, (**) does this today.)
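
For concreteness, a minimal sketch of such a contrivance for complex
arguments, essentially what (**) does for Complex today (module name as
in current libraries):

\begin{code}
import Data.Complex (Complex)

-- Principal-value exponentiation: take the principal branch of log.
cpvPow :: RealFloat a => Complex a -> Complex a -> Complex a
cpvPow w z = exp (z * log w)
\end{code}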

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
 I'm not convinced that Real is a great name for this, or that this
 is really the right type for all this stuff. I'd still like to see
 abs and signum generalized to vector spaces.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thurston wrote:
 After thinking about this, I decided that I would be happy calling the
 comparable operation on vector spaces "norm":
 a) it's compatible with mathematical usage
 b) it keeps the Prelude itself simple.
 It's unfortunate that the operation for complex numbers can't be
 called "abs", but I think it's reasonable.

I'm not entirely sure, but I think part of the reason this hasn't been
done already is because it's perhaps painful to statically type
dimensionality in vector spaces. On the other hand, assuming that the
user has perhaps contrived a representation satisfactory to him or her,
defining a class on the necessary type constructor shouldn't be tough
at all.
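
For example, assuming an invented class name Normed and lists as the
contrived representation, a norm over a type constructor is only a few
lines:

\begin{code}
class Normed f where
    norm :: Floating a => f a -> a

instance Normed [] where
    norm = sqrt . sum . map (^ (2 :: Int))
\end{code}

so that norm [3,4] gives 5.0, with the dimension simply being the length
of the list rather than something tracked statically.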

In a side note, it seems conventional to use abs and signum on complex
numbers (and functions), and also perhaps the same symbol as abs for
the norm on vectors and vector functions. It seems the distinction
drawn is that abs is definitely pointwise and the norm more often does
some sort of shenanigan like L^p norms etc. How much of this convention
should be preserved seems like a design decision, but perhaps one that
should be made explicit.

On Sun, Feb 11, 2001 at 06:48:42PM -0800, William Lee Irwin III wrote:
 [good stuff on lattices deleted] ...and Ord defines a partial order
 (and hence induces Eq) on a type.

On Sun, Feb 11, 2001 at 10:56:29PM -0500, Dylan Thur

Re: A sample revised prelude for numeric classes

2001-02-11 Thread William Lee Irwin III

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 Including laws was discussed very early in the development of the
 language, but was rejected. IIRC Miranda had them. The argument against
 laws was that their presence might mislead users into the assumption
 that they did hold, yet if they were not enforcable then they might not
 hold and that could have serious consequences. Also, some laws do not
 hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a
 is not bottom. 

I actually think it would be useful to have them and optionally
dynamically enforce them, or at least whichever ones are computable, as
a compile-time option. This would be _extremely_ useful for debugging
purposes, and I, at the very least, would use it. I think Eiffel does
something like this; can anyone else comment?

This, of course, is a language extension, and so probably belongs in
a different discussion from the rest of all this.
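
As a rough illustration of the kind of checking in question, laws can
already be stated as executable properties in the style of QuickCheck
(module name as in current packaging); this is a sketch, not a proposed
Prelude mechanism.

\begin{code}
import Test.QuickCheck (quickCheck)

prop_addAssociative :: Integer -> Integer -> Integer -> Bool
prop_addAssociative a b c = (a + b) + c == a + (b + c)

prop_oneLeftIdentity :: Integer -> Bool
prop_oneLeftIdentity a = 1 * a == a

checkLaws :: IO ()
checkLaws = do
    quickCheck prop_addAssociative
    quickCheck prop_oneLeftIdentity
\end{code}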

Dylan Thurston wrote:
 class (Additive a) => Num a where
     (*) :: a -> a -> a
     one :: a
     fromInteger :: Integer -> a
       -- Minimal definition: (*), one
     fromInteger 0 = zero
     fromInteger n | n < 0 = negate (fromInteger (-n))
     fromInteger n | n > 0 = reduceRepeat (+) one n

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 This definition requires both Eq and Ord!!!

Only on Integer, not on a.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 As does this one:

Dylan Thurston wrote:
 class (Num a, Additive b) => Powerful a b where
     (^) :: a -> b -> a
 instance (Num a) => Powerful a (Positive Integer) where
     a ^ 0 = one
     a ^ n = reduceRepeated (*) a n
 instance (Fractional a) => Powerful a Integer where
     a ^ n | n < 0 = recip (a ^ (negate n))
     a ^ n = a ^ (positive n)

I should note that both of these definitions that require Eq and
Ord only require them on Integer.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 and several others further down. 

I'm not sure which ones you hit on, though I'm sure we'd all be more
than happy to counter-comment on them or repair the inadequacies.

Dylan Thurston wrote:
 (4) In some cases, the hierarchy is not finely-grained enough:
 operations that are often defined independently are lumped
 together.  For instance, in a financial application one might want
 a type "Dollar", or in a graphics application one might want a
 type "Vector".  It is reasonable to add two Vectors or Dollars,
 but not, in general, reasonable to multiply them.  But the
 programmer is currently forced to define a method for (*) when she
 defines a method for (+).

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 Why do you stop at allowing addition on Dollars and not include
 multiplication by a scalar? Division is also readily defined on Dollar
 values, with a scalar result, but this, too, is not available in the
 proposal. 

I can comment a little on this, though I can't speak for someone else's
design decisions. In general, the results of division and multiplication
for units have a different result type than that of the arguments. This
makes defining them via type class overloading either require existential
wrappers or become otherwise difficult or impossible.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 Having Units as types, with the idea of preventing adding Apples to
 Oranges, or Dollars to Roubles, is a venerable idea, but is not in
 widespread use in actual programming languages. Why not?

I'm probably even less qualified to comment on this, but I'll conjecture
that the typing disciplines of most languages make it impractical. I
suspect it could be possible in Haskell.
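
To sketch the Apples-versus-Oranges part, which is already easy today
(the names below are invented for illustration, and the arithmetic must
be written out by hand rather than reusing (+) from Num):

\begin{code}
newtype Dollar = Dollar Double deriving (Eq, Ord, Show)
newtype Rouble = Rouble Double deriving (Eq, Ord, Show)

addDollars :: Dollar -> Dollar -> Dollar
addDollars (Dollar x) (Dollar y) = Dollar (x + y)

scaleDollars :: Double -> Dollar -> Dollar
scaleDollars k (Dollar x) = Dollar (k * x)

-- addDollars (Dollar 1) (Rouble 30) is rejected by the type checker.
\end{code}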

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 Vectors, too, can be multiplied, producing both scalar- and
 vector-products.

Exterior and inner products both encounter much the same troubles as
defining arithmetic on types with units attached, with the additional
complication that statically typing dimensionality is nontrivial.

On Mon, Feb 12, 2001 at 05:24:37PM +1300, Brian Boutel wrote:
 It seems that you are content with going as far as the proposal permits,
 though you cannot define, even within the revised Class system, all the
 common and useful operations on these types. This is the same situation
 as with Haskell as it stands. The question is whether the (IMHO)
 marginal increase in flexibility is worth the cost.
 This is not an argument for not separating Additive from Num, but it
 does weaken the argument for doing it.

I'm not convinced of this, though I _am_ convinced that a general
framework for units would probably be useful to have in either a
standard or add-on library distributed with Haskell, or perhaps to
attempt to address units even within the standard Prelude if it's
simple enough. Are you up to perhaps taking a stab at this? Perhaps
if you tried it within the framework 

Re: A sample revised prelude for numeric classes

2001-02-11 Thread William Lee Irwin III

At 2001-02-11 21:18, Tom Pledger wrote:
The main complication is that the type system needs to deal with
integer exponents of dimensions, if it's to do the job well.

On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote:
 Very occasionally non-integer or 'fractal' exponents of dimensions are 
 useful. For instance, geographic coastlines can be measured in km ^ n, 
 where 1 <= n < 2. This doesn't stop the CIA world factbook listing all 
 coastline lengths in straight kilometres, however.

This is pretty rare, and it's also fairly tough to represent points in
spaces of fractional dimension. I'll bet the sorts of complications
necessary to do so would immediately exclude it from consideration in
the design of a standard library, but nevertheless would be interesting
to hear about. Can you comment further on this?

On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote:
 More unit weirdness occurs with logarithms. For instance, if y and x are 
 distances, log (y/x) = log y - log x. Note that 'log x' is some number + 
 log (metre). Strange, huh?

If you (or anyone else) could comment on what sorts of units would be
appropriate for the result type of a logarithm operation, I'd be glad to
hear it. I don't know what the result type of this example is supposed
to be if the units of a number are encoded in the type.

On Sun, Feb 11, 2001 at 10:16:02PM -0800, Ashley Yakeley wrote:
 Interestingly, in C++ you can parameterise types by values. For instance:
[interesting C++ example elided]
 Can you do this sort of thing in Haskell?

No, in general I find it necessary to construct some sort of set of
types parallel to the actual data type, define some sort of existential
data type encompassing the set of all types which can represent one of
those appropriate values, and "lift" things to that type by means of
sample arguments. I usually like ensuring that the types representing
things like integers never actually have any sort of data manifest,
i.e. the sample arguments are always undefined. This is a bit awkward.
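
A stripped-down sketch of the trick, with invented names and nowhere
near a full dimension library: the types carry the information, and the
sample argument is never evaluated.

\begin{code}
data Zero
data Succ n

class Card n where
    card :: n -> Int        -- the argument is only a type carrier

instance Card Zero where
    card _ = 0

instance Card n => Card (Succ n) where
    card x = 1 + card (predOf x)
      where
        predOf :: Succ m -> m
        predOf _ = undefined

-- card (undefined :: Succ (Succ Zero)) evaluates to 2.
\end{code}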

I think Okasaki's work on square matrices and perhaps some other ideas
should be exploited for this sort of thing, as there is quite a bit of
opposition to the usage of sample arguments. I'd like to see a library
for vector spaces based on similar ideas. I seem to be caught up in
other issues caused by mucking with fundamental data types' definitions,
my working knowledge of techniques like Okasaki's is insufficient for
the task, and my design concepts are probably too radical for general
usage, so I'm probably not the man for the job, though I will very
likely take a stab at such a beast for my own edification.


Cheers,
Bill




Re: Semantics of signum

2001-02-10 Thread William Lee Irwin III

On Sat, Feb 10, 2001 at 11:25:46AM -0500, Dylan Thurston wrote:
 Can you elaborate?  What do you mean by signum for functions?  The 
 pointwise signum?  Then abs would be the pointwise abs as well, right?
 That might work, but I'm nervous because I don't know the semantics
 for signum/abs in such generality.  What identities should they
 satisfy?  (The current Haskell report says nothing about the meaning
 of these operations, in the same way it says nothing about the meaning
 of (+), (-), and (*).  Compare this to the situation for the Monad class,
 where the fundamental identities are given.  Oddly, there are identities
 listed for 'quot', 'rem', 'div', and 'mod'.  For +, -, and * I can guess
 what identities they should satisfy, but not for signum and abs.)

Pointwise signum and abs are common in analysis. The identity is:

signum f * abs f = f

I've already done the pointwise case. As I've pointed out before,
abs has the wrong type for doing anything with vector spaces, though
perhaps abs is a distinct notion from norm.

On Sat, Feb 10, 2001 at 11:25:46AM -0500, Dylan Thurston wrote:
 (Note that pointwise abs of functions yields a positive function, which
 are not ordered but do have a sensible notion of max and min.)

The ordering you're looking for needs a norm. If you really want a
notion of size on functions, you'll have to do it with something like
one of the L^p norms for continua and the \ell^p norms for discrete
spaces which are instances of Enum. There is a slightly problematic
aspect with this in that the domain of the function does not entirely
determine the norm, and furthermore adequately dealing with the
different notions of measure on these spaces with the type system is
probably also intractable. The sorts of issues raised by trying to
define norms on functions probably rather quickly relegate it to
something the user should explicitly define, as opposed to something
that should appear in a Prelude standard or otherwise. That said,
one could do something like

instance Enum a => Enum (MyTree a) where
    ... -- it's tricky, but possible, you figure it out

instance (Enum a, RealFloat b) => NormedSpace (MyTree a -> b) where
    norm f = approxsum $ zipWith (*) (map f . enumFrom $ toEnum 0) weights
      where
        weights = map (\x -> 1/factorial x) [0..]
        factorial x = product [1..x]          -- naive factorial for the weights
        approxsum [] = 0
        approxsum (x:xs) | x < 1.0e-6 = 0
                         | otherwise  = x + approxsum xs

and then do the usual junk where

instance NormedSpace a => Ord a where
    f < g = norm f < norm g
    ...


Cheers,
Bill




Re: In hoc signo vinces (Was: Revamping the numeric classes)

2001-02-09 Thread William Lee Irwin III

Fri, 09 Feb 2001 10:52:39 +, Jerzy Karczmarczuk pisze:
 Again, a violation of the orthogonality principle. Needing division
 just to define signum. And of course a completely different approach
 do define the signum of integers. Or of polynomials...

On Fri, Feb 09, 2001 at 07:19:21PM +, Marcin 'Qrczak' Kowalczyk wrote:
 So what? That's why it's a class method and not a plain function with
 a single definition.
 
 Multiplication of matrices is implemented differently than
 multiplication of integers. Why don't you call it a violation of the
 orthogonality principle (whatever it is)?

Matrix rings actually manage to expose the inappropriateness of signum
and abs' definitions and relationships to Num very well:

class  (Eq a, Show a) => Num a  where
    (+), (-), (*)   :: a -> a -> a
    negate          :: a -> a
    abs, signum     :: a -> a
    fromInteger     :: Integer -> a
    fromInt         :: Int -> a -- partain: Glasgow extension

Pure arithmetic ((+), (-), (*), negate) works just fine.

But there are no good injections to use for fromInteger or fromInt,
the type of abs is wrong if it's going to be a norm, and it's not
clear that signum makes much sense.

So we have two totally inappropriate operations (fromInteger and
fromInt), one operation which has the wrong type (abs), and an operation
which doesn't have well-defined meaning (signum) on matrices. If
we want people doing graphics or linear algebraic computations to
be able to go about their business with their code looking like
ordinary arithmetic, this is, perhaps, a real concern.

I believe that these applications are widespread enough to be concerned
about how the library design affects their aesthetics.


Cheers,
Bill
-- 
craving Weak coffee is only fit for lemmas.
--




Re: Revamping the numeric classes

2001-02-08 Thread William Lee Irwin III

On Thu, Feb 08, 2001 at 08:30:31PM +, Marcin 'Qrczak' Kowalczyk wrote:
 Signum doesn't require Ord.
 signum z = z / abs z
 for complex numbers.

I'd be careful here.

\begin{code}
signum 0 = 0
signum z = z / abs z
\end{code}

This is, perhaps, neither precise nor general enough.

The signum/abs pair seem to represent direction and magnitude.
According to the line of reasoning in some of the earlier posts in this
flamewar, the following constraints:

(1) z = signum z * abs z where * is appropriately defined
(2) abs $ signum z = 1

should be enforced, if possible, by the type system. This suggests
that for any type having a vector space structure over Fractional
(or whatever the hierarchy you're brewing up uses for rings with
a division partial function on them), the result type of signum
lives in a more restricted universe, perhaps even one with a different
structure (operations defined on it, set of elements) than the argument
type, and it seems more than possible to parametrize it on the argument
type. The abs is in fact a norm, and the signum projects V^n -> V^n / V.
Attempts to define these things on Gaussian integers, p-adic numbers,
polynomial rings, and rational points on elliptic curves will quickly
reveal limitations of the stock class hierarchy.
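
One way to render that parametrization, as a sketch in terms of
multi-parameter classes with functional dependencies (the class and
method names are invented), is to let the magnitude and the direction
each have their own result type per argument type:

\begin{code}
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class DirMag v s d | v -> s, v -> d where
    mag :: v -> s           -- the role abs/norm plays
    dir :: v -> d           -- the role signum plays

-- A toy instance on pairs of Doubles, standing in for a 2-vector:
-- the magnitude is a scalar, the direction stays a vector.
instance DirMag (Double, Double) Double (Double, Double) where
    mag (x, y)   = sqrt (x*x + y*y)
    dir (0, 0)   = (0, 0)
    dir v@(x, y) = (x / r, y / r) where r = mag v
\end{code}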

Now, the doubtful wisdom of scaring newcomers to the language
into math phobia, wetting their pants, and running screaming with
subtleties like this suggests that one or more "alternative
Preludes" may be desirable to have. There is a standard Prelude; why not
a nonstandard one or two? We have the source. The needs of the geek do
not outweigh the needs of the many. Hence, we can cook up a few Preludes
of our own: if we can tinker enough to spam the list with counterexamples
and suggestions of what we'd like the Prelude to have, we can certainly
compile up a Prelude for ourselves with our "suggested changes" included,
and perhaps one day knock together something which can actually be used
and has been tested, no?

The Standard Prelude serves its purpose well and accommodates the
largest cross-section of users. Perhaps a Geek Prelude could
accommodate the few of us who do need these sorts of schenanigans.


Cheers,
Bill
-- 
j0][nD33R:#math Excel/Spreadsheet Q: What is the formula for finding
out the time passed between two dates and or two times in the same day?
MatroiDN:#math excel/spreadsheet? Hmm, this is math? Is there a GTM on
excel or maybe an article in annals about spreadsheets or maybe
there's a link from wolfram to doing your own computer work, eh?
danprime:#math jeeem, haven't you seen "Introduction to Algebraic Excel"?
danprime:#math or "Spreadsheet Space Embeddings in 2-Manifolds"
brouwer:#math i got my phd in spreadsheet theory
brouwer:#math i did my thesis on the spreadsheet conjecture




Re: A simple problem

2001-01-18 Thread William Lee Irwin III

At 2001-01-18 15:57, William Lee Irwin III wrote:
class C a where
  fun :: a -> Integer

instance Integral a => C a where
  fun = toInteger . succ
 
On Thu, Jan 18, 2001 at 04:04:01PM -0800, Ashley Yakeley wrote:
 Gives "syntax error in instance head (constructor expected)" at the 
 'instance' line in Hugs. Is there a option I need to turn on or something?

Yes: invoke Hugs with the -98 option to turn off strict Haskell 98
compliance.

Cheers,
Bill
