Re: GHC and Haskell 98

2011-06-17 Thread Uwe Hollerbach
On 6/17/11, Daniel Fischer daniel.is.fisc...@googlemail.com wrote:
 On Friday 17 June 2011, 17:11:39, Jacques Carette wrote:
 I favour Plan A.

 +1



+2



[Haskell-cafe] ANNOUNCE: UMM-0.3.0

2010-07-11 Thread Uwe Hollerbach
Hi, all, I've just uploaded version 0.3.0 of umm, my small
money-manager program, to hackage. This version does nicer plotting of
data than before (depends on gnuplot). Have a look if you... errr...
like money :-)

regards, Uwe


Re: [Haskell-cafe] Re: Haskell and scripting

2010-05-05 Thread Uwe Hollerbach
As the author of haskeem, I'm thrilled that you are considering it,
but to be honest I'm not quite sure it's embeddable in the way (I
think) you want. If you want to give it a try, though, I'd be more
than happy to try to help.

best, Uwe

On 5/5/10, Limestraël limestr...@gmail.com wrote:
 Thanks!

 However, I don't forget that my goal is to get a system monitor
 configuration language.

 Lua may have some functional components, but it remains imperative; I think a
 more declarative language like Scheme would be more appropriate (and there
 is also a Scheme interpreter, haskeem).
 What do you think about it?


 2010/5/5 Ivan Lazar Miljenovic ivan.miljeno...@gmail.com

 Limestraël limestr...@gmail.com writes:

  How do you embed Lua in Haskell?

 http://hackage.haskell.org/package/hslua

 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com




Re: [Haskell-cafe] Tokenizing and Parsec

2010-01-11 Thread Uwe Hollerbach
Hi, Günther, you could write functions that pattern-match on various
sequences of tokens in a list; for an example of that, have a look at
the file Evaluator.hs in my scheme interpreter haskeem. Or you could
build up more-complex data structures entirely within parsec; for
that, I would point you at the file Parser.hs in my accounting program
umm. Both are on hackage. Undoubtedly there are many more and probably
better examples, but I think these are at least a start...
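
As a rough, hypothetical sketch of that first, pattern-matching idea
(this is not code from haskeem or umm; it assumes the Token type quoted
below and invents an Expr result type purely for illustration):

 -- Hypothetical: group short sequences of Tokens directly by matching
 -- on their constructors; anything unrecognized is passed through.
 data Expr = Entry String String
           | Unparsed Token
   deriving Show

 parseTokens :: [Token] -> [Expr]
 parseTokens []                      = []
 parseTokens (ZE z : Other o : rest) = Entry z o : parseTokens rest
 parseTokens (t : rest)              = Unparsed t : parseTokens rest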

regards, Uwe

On 1/11/10, Günther Schmidt gue.schm...@web.de wrote:
 Hi all,

 I've used Parsec to tokenize data from a text file. It was actually
 quite easy, everything is correctly identified.

 So now I have a list/stream of self defined Tokens and now I'm stuck.
 Because now I need to write my own parsec-token-parsers to parse this
 token stream in a context-sensitive way.

 Uhm, how do I that then?

 Günther

 a Token is something like:

 data Token = ZE String
            | OPS
            | OPSShort String
            | OPSLong String
            | Other String
            | ZECd String
   deriving Show





Re: [Haskell-cafe] haskell code from hi

2009-11-21 Thread Uwe Hollerbach
Ouch... my condolences, but I think you're screwed. I think the .hi
files are purely interface info, and the .o files have all the info on
what to actually do (and getting to .hs files from .hi+.o is gonna be
like going from sausage to pig, in any case). If you haven't messed
with the disk, I think your best bet might be to try and undelete
files. That might be as messy as looking at the raw disk image and
trying to recover disk sectors, or possibly there are still entire
files there that are just not referenced by directory entries. Either
(or any) way, it's a bit chancy...

Uwe

On 11/21/09, Ozgur Akgun ozgurak...@gmail.com wrote:
 Is there possibly a way of getting source code, given a *.hi file?

 Yes, you're right, I deleted all my *.hs files while trying to remove the
 *.hi ones!!

 Desperately,



 --
 Ozgur Akgun



Re: [Haskell-cafe] Parsec bug, or...?

2009-10-15 Thread Uwe Hollerbach
Hi, all, thanks for the further inputs, all good stuff to think
about... although it's going to be a little while before I can
appreciate the inner beauty of Doaitse's version! :-) I had considered
the approach of doing a post-parsec verification, but decided I wanted
to keep it all inside the parser, hence the desire to match prefixes
there (and the lack of desire to write string "p" <|> string "pr" <|>
string "pre" ...).
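
For contrast, a minimal sketch of that post-parsec verification idea;
the command list and the resolve function are illustrative only, not
taken from umm:

 import Data.List (isPrefixOf)

 -- Resolve a word typed by the user against the known commands after
 -- parsing, instead of matching prefixes inside the parser itself.
 commands :: [String]
 commands = ["banana", "chocolate", "frito", "fromage"]

 resolve :: String -> Either String String
 resolve w = case filter (w `isPrefixOf`) commands of
   [c] -> Right c
   []  -> Left ("unknown command: " ++ w)
   cs  -> Left ("ambiguous prefix '" ++ w ++ "': " ++ unwords cs)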

By way of background, the actual stuff I'm wanting to match is not
food names, but some commands for a small ledger program I'm working
on. I needed something like that and was tired of losing data to
quicken every so often. I realize of course that there are other
excellent ledger-type programs out there, but hey, I also needed
another hacking project. I'll put this onto hackage in a while, once
it does most of the basics of what I need. No doubt the main
differentiator between mine and those other excellent ledger programs
out there will be that mine has fewer features and more bugs...

thanks again, all!

Uwe


Re: [Haskell-cafe] Parsec bug, or...?

2009-10-13 Thread Uwe Hollerbach
On 10/12/09, Martijn van Steenbergen mart...@van.steenbergen.nl wrote:
 Brandon S. Allbery KF8NH wrote:
 My fix would be to have myPrefixOf require the prefix be terminated in
 whatever way is appropriate (end of input, white space, operator?)
 instead of simply accepting as soon as it gets a prefix match regardless
 of what follows.

 Maybe you can use notFollowedBy for this.

 HTH,

 Martijn.



Yes, I've looked at that and am thinking about it. I'm not quite
certain it's needed in my real program... I seem to have convinced
myself that if I actually specify a proper set of unique prefixes, i.e.,
set the required lengths for both "frito" and "fromage" to 3 in the
test program, I won't get into this situation. Assuming I haven't
committed another brain-fart there, that would be sufficient;
presumably, in a real program one would want to actually specify the
unique prefix, rather than a non-unique pre-prefix. It seems to work
fine in my real program, anyway.
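
A hedged sketch of the notFollowedBy variant under discussion, assuming
the Parsec 2 API used elsewhere in this thread; the primed name is
illustrative:

 import Text.ParserCombinators.Parsec

 -- Require that a prefix match is not followed by another letter, so a
 -- non-unique prefix like "fr" cannot commit to "frito" on input "fro".
 myPrefixOf' :: Int -> String -> Parser String
 myPrefixOf' n str =
   try (string (take n str) >> opts (drop n str)
          >> notFollowedBy alphaNum >> return str)
   where opts []     = return ()
         opts (c:cs) = optional (char c >> opts cs)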

Uwe


[Haskell-cafe] Parsec bug, or...?

2009-10-12 Thread Uwe Hollerbach
a brain fart?

Hi, cafe, I've been playing a little bit with a small command
processor, and I decided it'd be nice to allow the user to not have to
enter a complete command, but to recognize a unique prefix of it. So I
started with the list of allowed commands, used filter and isPrefixOf,
and was happy. But then I increased the complexity a little bit and it
got hairier, so I decided to rewrite the parser for this bit in
parsec. The function I came up with is

parsePrefixOf n str =
  string (take n str) >> opts (drop n str) >> return str
  where opts []     = return ()
        opts (c:cs) = optional (char c >> opts cs)

which I call as

parseFoo = parsePrefixOf 1 "foo"

and it recognizes all of "f", "fo", and "foo" as "foo".

OK so far, this also seems to work fine. But during the course of
writing this, I made a stupid mistake at one point, and the result of
that seemed odd. Consider the following program. It's stupid because
the required prefix of "frito" is only 2 characters, which isn't
enough to actually distinguish it from the next one, "fromage". (And
if I change that 2 to 3 characters, everything works fine.) So
here's the complete program:

module Main where

import Prelude
import System
import Text.ParserCombinators.Parsec as TPCP

myPrefixOf n str =
  string (take n str) >> opts (drop n str) >> return str
  where opts []     = return ()
        opts (c:cs) = optional (char c >> opts cs)

myTest =     myPrefixOf 1 "banana"
         <|> myPrefixOf 1 "chocolate"
         <|> TPCP.try (myPrefixOf 2 "frito")
         <|> myPrefixOf 3 "fromage"

myBig = spaces >> myTest >>= (\g -> spaces >> eof >> return g)

parseTry input =
  case parse myBig "test" input of
    Left err  -> return (show err)
    Right val -> return ("success: '" ++ val ++ "'")

main = getArgs >>= (\a -> parseTry (a !! 0)) >>= putStrLn

If I compile this, say as program opry, and run it as shown below, I
expect the results I get for all but the last one:

% ./opry b
success: 'banana'

% ./opry c
success: 'chocolate'

% ./opry fr
success: 'frito'

% ./opry fri
success: 'frito'

% ./opry fro
"test" (line 1, column 3):
unexpected "o"
expecting "i", white space or end of input

Sooo... why do I get that last one? My expectation was that parsec
would try the string "fro" with the parser for "frito", it would fail,
having consumed 2 characters, but then the TPCP.try which is wrapped
around all of that should restore everything, and then the final
parser for "fromage" should succeed. The same reasoning seems to me to
apply if I specify 3 characters as the required initial portion for
"frito", and if I do that, it does succeed as I expect.

So is this a bug in parsec, or a bug in my brain?

thanks... Uwe


Re: [Haskell-cafe] Parsec bug, or...?

2009-10-12 Thread Uwe Hollerbach
On 10/12/09, Derek Elkins derek.a.elk...@gmail.com wrote:
 On Mon, Oct 12, 2009 at 9:28 PM, Uwe Hollerbach uhollerb...@gmail.com
 wrote:
 a brain fart?

 Hi, cafe, I've been playing a little bit with a small command
 processor, and I decided it'd be nice to allow the user to not have to
 enter a complete command, but to recognize a unique prefix of it. So I
 started with the list of allowed commands, used filter and isPrefixOf,
 and was happy. But then I increased the complexity a little bit and it
 got hairier, so I decided to rewrite the parser for this bit in
 parsec. The function I came up with is

 parsePrefixOf n str =
   string (take n str) >> opts (drop n str) >> return str
   where opts []     = return ()
         opts (c:cs) = optional (char c >> opts cs)

 which I call as

 parseFoo = parsePrefixOf 1 "foo"

 and it recognizes all of "f", "fo", and "foo" as "foo".

 OK so far, this also seems to work fine. But during the course of
 writing this, I made a stupid mistake at one point, and the result of
 that seemed odd. Consider the following program. It's stupid because
 the required prefix of "frito" is only 2 characters, which isn't
 enough to actually distinguish it from the next one, "fromage". (And
 if I change that 2 to 3 characters, everything works fine.) So
 here's the complete program:

 module Main where

 import Prelude
 import System
 import Text.ParserCombinators.Parsec as TPCP

 myPrefixOf n str =
   string (take n str) >> opts (drop n str) >> return str
   where opts []     = return ()
         opts (c:cs) = optional (char c >> opts cs)

 myTest =     myPrefixOf 1 "banana"
          <|> myPrefixOf 1 "chocolate"
          <|> TPCP.try (myPrefixOf 2 "frito")
          <|> myPrefixOf 3 "fromage"

 myBig = spaces >> myTest >>= (\g -> spaces >> eof >> return g)

 parseTry input =
   case parse myBig "test" input of
     Left err  -> return (show err)
     Right val -> return ("success: '" ++ val ++ "'")

 main = getArgs >>= (\a -> parseTry (a !! 0)) >>= putStrLn

 If I compile this, say as program opry, and run it as shown below, I
 expect the results I get for all but the last one:

 % ./opry b
 success: 'banana'

 % ./opry c
 success: 'chocolate'

 % ./opry fr
 success: 'frito'

 % ./opry fri
 success: 'frito'

 % ./opry fro
 "test" (line 1, column 3):
 unexpected "o"
 expecting "i", white space or end of input

 Sooo... why do I get that last one? My expectation was that parsec
 would try the string "fro" with the parser for "frito", it would fail,
 having consumed 2 characters, but then the TPCP.try which is wrapped
 around all of that should restore everything, and then the final
 parser for "fromage" should succeed. The same reasoning seems to me to
 apply if I specify 3 characters as the required initial portion for
 "frito", and if I do that, it does succeed as I expect.

 So is this a bug in parsec, or a bug in my brain?

 Move the try to the last alternative.


No, that doesn't do it... I get the same error (and also the same if I
wrap both alternatives in try).

Uwe


Re: [Haskell-cafe] How to calculate the number of digits of an integer? (was: Is logBase right?)

2009-08-29 Thread Uwe Hollerbach
Ouch! That is indeed an improvement... I don't recall all the details
of this codelet, but I think I got the seed off the net somewhere
(perhaps this list?), and it might well have been better originally.
So, brightly, brightly, and with beauty, I probably executed a
Verschlimmbesserung. After a year and a half, I find I still have
almost no intuition about performance issues in Haskell... guess I
have to practice more.

Uwe

On 8/29/09, Bertram Felgenhauer bertram.felgenha...@googlemail.com wrote:
 Uwe Hollerbach wrote:
 Here's my version... maybe not as elegant as some, but it seems to
 work. For base 2 (or 2^k), it's probably possible to make this even
 more efficient by just walking along the integer as stored in memory,
 but that difference probably won't show up until at least tens of
 thousands of digits.

 Uwe

 ilogb :: Integer -> Integer -> Integer
 ilogb b n | n < 0     = ilogb b (- n)
           | n < b     = 0
           | otherwise = (up 1) - 1
   where up a = if n < (b ^ a)
                   then bin (quot a 2) a
                   else up (2*a)
         bin lo hi = if (hi - lo) <= 1
                        then hi
                        else let av = quot (lo + hi) 2
                             in if n < (b ^ av)
                                   then bin lo av
                                   else bin av hi

 We can streamline this algorithm, avoiding the repeated iterated squaring
 of the base that (^) does:

 -- numDigits b n | n < 0 = 1 + numDigits b (-n)
 numDigits b n = 1 + fst (ilog b n) where
     ilog b n
         | n < b     = (0, n)
         | otherwise = let (e, r) = ilog (b*b) n
                       in  if r < b then (2*e, r) else (2*e+1, r `div` b)

 It's a worthwhile optimization, as timings on n = 2^1000000 show:

 Prelude T> length (show n)
 301030
 (0.48 secs, 17531388 bytes)
 Prelude T> numDigits 10 n
 301030
 (0.10 secs, 4233728 bytes)
 Prelude T> ilogb 10 n
 301029
 (1.00 secs, 43026552 bytes)

 (Code compiled with -O2, but the interpreted version is just as fast; the
 bulk of the time is spent in gmp anyway.)

 Regards,

 Bertram



Re: [Haskell-cafe] How to calculate de number of digits of an integer? (was: Is logBase right?)

2009-08-26 Thread Uwe Hollerbach
Here's my version... maybe not as elegant as some, but it seems to
work. For base 2 (or 2^k), it's probably possible to make this even
more efficient by just walking along the integer as stored in memory,
but that difference probably won't show up until at least tens of
thousands of digits.

Uwe

ilogb :: Integer -> Integer -> Integer
ilogb b n | n < 0     = ilogb b (- n)
          | n < b     = 0
          | otherwise = (up 1) - 1
  where up a = if n < (b ^ a)
                  then bin (quot a 2) a
                  else up (2*a)
        bin lo hi = if (hi - lo) <= 1
                       then hi
                       else let av = quot (lo + hi) 2
                            in if n < (b ^ av)
                                  then bin lo av
                                  else bin av hi

numDigits n = 1 + ilogb 10 n

[fire up ghci, load, etc]

*Main> numDigits (10^1500 - 1)
1500
*Main> numDigits (10^1500)
1501


Re: [Haskell-cafe] following up on space leak

2009-07-11 Thread Uwe Hollerbach
Hi, George, thanks for the pointer, it led me to some interesting
reading. Alas, the problem which it solves was already solved, and the
unsolved problem didn't yield any further...

At this point, I've concluded that my interpreter just simply isn't
tail-recursive enough: in the Collatz test case I had originally
looked at and mentioned, it seems that no matter what I do the memory
usage stays the same. Initially, a significant portion of the usage
showed up as one particular function in the interpreter which applies
binary numerical operators to a list of numbers. It's a moderately
complex function, as it deals with any number of operands, and it
takes care of type conversions as well: if I add two integers, I want
the result to be an integer; if I add in a float, the result will be a
float, etc.

In my particular usage in this test case, it was only getting used to
increment an integer; so I simplified that: I added an "incr" function
to my interpreter and called that instead... now exactly the same
amount of memory usage shows up in the cost center labeled "incr" as
was previously being used in the more-complex numeric-binary-operator
function. I've cut down the interpreter to about a quarter of its
original size, now I've got a version that really is only useful for
running this Collatz test case, and... it uses exactly the same amount
of memory as before.

The last thing I tried before giving up was to try and make a
more-strict bind operator; I think I wrote that as

(!>>=) !m !k = m >>= k

with appropriate -XBangPatterns added to the compiler options. It
passed all the self-tests for the interpreter, so I'm pretty sure I
didn't do anything wrong, but it made no difference to the memory
usage. So for now I've shelved that problem, I'm looking instead at
adding proper continuations to the interpreter.
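
For what it's worth, a bang-pattern bind like that only forces the two
arguments (the action and the continuation), which are generally already
in weak head normal form; a minimal sketch of a bind that instead forces
each intermediate result, with an illustrative operator name:

 -- Force the result of each step to WHNF before passing it on, so that
 -- thunks cannot accumulate between binds.
 (>>=!) :: Monad m => m a -> (a -> m b) -> m b
 m >>=! k = m >>= \x -> x `seq` k x
 infixl 1 >>=!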

Uwe

On 7/7/09, George Pollard por...@porg.es wrote:
 I believe there might be an elegant solution for this using the `Last`
 monoid



Re: [Haskell-cafe] following up on space leak

2009-07-05 Thread Uwe Hollerbach
On 7/5/09, Paul L nine...@gmail.com wrote:
 Previously you had lastOrNil taking m [a] as input, presumably
 generated by mapM. So mapM is actually building an entire list before
 it returns the argument for you to call lastOrNil. This is where you
 had unexpected memory behavior.

 Now you are fusing lastOrNil and mapM together, and instead of
 building a list, you traverse it and perform monadic action along the
 way. This can happen in a constant memory if the original pure list is
 generated lazily.

 I think the real problem you had was a mis-understanding of mapM, and
 there was nothing wrong with your previous lastOrNil function. mapM
 will only return a list after all monadic actions are performed, and
 in doing so, it inevitably has to build the entire list along the way.

 --
 Regards,
 Paul Liu

 Yale Haskell Group
 http://www.haskell.org/yale

Hi, Paul, thanks for the comments. You're quite right that I am fusing
the two functions together, but I think I wasn't mis-understanding
mapM... I knew I was generating the entire list, and aside from the
slight inefficiency of generating it only to tear it down an instant
later, that would have been no problem. But I was expecting all of the
memory associated with the list to be reclaimed after I had processed
it, and that was what was not happening as far as I could tell. (This
isn't one monolithic list, by the way; it's the small bodies of a
couple of small scheme functions that get evaluated over and over. So
the setup and teardown happens a lot.) I don't have very good
intuition yet about what should get garbage-collected and what should
get kept in such situations, and in fact I'm kind of in the same boat
again: the test case now runs much better, but it still leaks memory,
and I am again stumped as to why. Could I see something useful by
examining ghc core? I haven't looked at that yet, no idea what to look
for...

Uwe


Re: [Haskell-cafe] following up on space leak

2009-07-05 Thread Uwe Hollerbach
On 7/5/09, Alexander Dunlap alexander.dun...@gmail.com wrote:
 On Sun, Jul 5, 2009 at 7:46 PM, Uwe Hollerbachuhollerb...@gmail.com
 wrote:
 On 7/5/09, Paul L nine...@gmail.com wrote:
 Previously you had lastOrNil taking m [a] as input, presumably
 generated by mapM. So mapM is actually building an entire list before
 it returns the argument for you to call lastOrNil. This is where you
 had unexpected memory behavior.

 Now you are fusing lastOrNil and mapM together, and instead of
 building a list, you traverse it and perform monadic action along the
 way. This can happen in a constant memory if the original pure list is
 generated lazily.

 I think the real problem you had was a mis-understanding of mapM, and
 there was nothing wrong with your previous lastOrNil function. mapM
 will only return a list after all monadic actions are performed, and
 in doing so, it inevitably has to build the entire list along the way.

 --
 Regards,
 Paul Liu

 Yale Haskell Group
 http://www.haskell.org/yale

 Hi, Paul, thanks for the comments. You're quite right that I am fusing
 the two functions together, but I think I wasn't mis-understanding
 mapM... I knew I was generating the entire list, and aside from the
 slight inefficiency of generating it only to tear it down an instant
 later, that would have been no problem. But I was expecting all of the
 memory associated with the list to be reclaimed after I had processed
 it, and that was what was not happening as far as I could tell. (This
 isn't one monolithic list, by the way; it's the small bodies of a
 couple of small scheme functions that get evaluated over and over. So
 the setup and teardown happens a lot.) I don't have very good
 intuition yet about what should get garbage-collected and what should
 get kept in such situations, and in fact I'm kind of in the same boat
 again: the test case now runs much better, but it still leaks memory,
 and I am again stumped as to why. Could I see something useful by
 examining ghc core? I haven't looked at that yet, no idea what to look
 for...

 Uwe

 mapM_ might be useful to you. I know there are cases where mapM leaks
 memory but mapM_ doesn't, basically because mapM_ throws away all of
 the intermediate results immediately. You might want to condition on
 nullness of the list and then mapM_ your function over the init of the
 list and then just return the function on the last element of the
 list.

 Alex
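
A minimal sketch of the shape Alex describes, with the empty-list
default passed in explicitly rather than hard-wired; the name lastOrNilM
is illustrative:

 -- Run the action for effect on all but the last element, and keep
 -- only the result for the last one.
 lastOrNilM :: Monad m => b -> (a -> m b) -> [a] -> m b
 lastOrNilM nil _ [] = return nil
 lastOrNilM _   f xs = mapM_ f (init xs) >> f (last xs)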


Oh, sorry, I was not clear in my original note in this thread: the
lastOrNil issue seems to be solved. That part of the code is, as far
as I can tell, not leaking memory at all anymore. I think I can claim
that because now the constant memory allocation is showing up visibly
in the profiling output; before, it was lost in the noise. So, if
there is a leak there, it's tiny compared with the constant stuff at
least for this benchmark. There are still two or perhaps three leaks,
and these show up as large but not huge compared to the constant
stuff. I've got a plot of this up on the haskeem website:
http://www.korgwal.com/haskeem/run_new.png.

The bits where I am stumped now are two-fold: one is (I think)
analogous to the lastOrNil issue, except that instead of feeding the
result to lastOrNil, I am doing a more general fold. So there I do
need all the results. I tried the same fusion as with lastOrNil/mapML,
and as far as I can tell I'm not building any lists; but this time it
didn't change the behavior at all, other than causing the names of
some profiling cost centers to change. This is the #2 entry on the
plot above.
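
For that more general fold, a hedged sketch of a strict monadic left
fold that combines each result into the accumulator as it goes instead
of materializing a list with mapM first (the names are illustrative):

 -- Force the accumulator to WHNF at every step so it cannot pile up
 -- as a chain of thunks.
 foldM' :: Monad m => (b -> a -> m b) -> b -> [a] -> m b
 foldM' _ z []     = return z
 foldM' f z (x:xs) = do
   z' <- f z x
   z' `seq` foldM' f z' xs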

The other issue seems to have something to do with IORefs, I'm
dynamically building environments for my scheme functions, and somehow
there seems to be something going wrong with reclaiming that memory
after it's done. This is the #1 and I think #3 entry on the plot. I
don't know enough details there yet to be able to say any more.

Uwe


[Haskell-cafe] following up on space leak

2009-07-04 Thread Uwe Hollerbach
Good evening, all, following up on my question regarding space leaks,
I seem to have stumbled across something very promising. I said I was
using this tiny function lastOrNil to get the last value in a list,
or the empty (scheme) list if the haskell list was empty. The uses of
it were all of the form

lastOrNil (mapM something some list)

so I wrote a different function mapML to do this directly:

 mapML fn lst = mapMLA (List []) fn lst
   where mapMLA r  _  []     = return r
         mapMLA ro fn (x:xs) =
            do rn <- fn x
               mapMLA rn fn xs

This isn't an accumulator, it's a replacer (or, if you like, the
accumulation is drop the old one on the floor), it starts out with
the scheme empty list that I want as the default, and it never even
builds the list which it'll just dump an instant later. Shazam! Memory
usage dropped by roughly an order of magnitude in my little Collatz
benchmark, and incidentally runtime improved by 25% or so as well. The
horror! :-)

Having tasted blood, I will of course be continuing to benchmark...
but not tonight.

Uwe


Re: [Haskell-cafe] following up on space leak

2009-07-04 Thread Uwe Hollerbach
On 7/4/09, Marcin Kosiba marcin.kos...@gmail.com wrote:
 On Saturday 04 July 2009, Uwe Hollerbach wrote:
 Good evening, all, following up on my question regarding space leaks,
 I seem to have stumbled across something very promising. I said I was
 using this tiny function lastOrNil to get the last value in a list,
 or the empty (scheme) list if the haskell list was empty. The uses of
 it were all of the form

 lastOrNil (mapM something some list)

 so I wrote a different function mapML to do this directly:
  mapML fn lst = mapMLA (List []) fn lst
    where mapMLA r  _  []     = return r
          mapMLA ro fn (x:xs) =
             do rn <- fn x
                mapMLA rn fn xs

 This isn't an accumulator, it's a replacer (or, if you like, the
 accumulation is drop the old one on the floor), it starts out with
 the scheme empty list that I want as the default, and it never even
 builds the list which it'll just dump an instant later. Shazam! Memory
 usage dropped by roughly an order of magnitude in my little Collatz
 benchmark, and incidentally runtime improved by 25% or so as well. The
 horror! :-)

 Hi,
   IMHO expressing mapML using StateT would be a bit cleaner ;)

 mapML :: (Monad m) => (a -> m List) -> [a] -> m List
 mapML fn lst = execStateT mapMLAs (List [])
   where
     mapMLAs  = sequence_ $ map mapMLA lst
     mapMLA x = (lift $ fn x) >>= put

 --
 Marcin Kosiba

Yeah, I'm sure there are more-elegant ways to write this, I'm still
very much a beginner in haskell. I'm just very thrilled by the
reduction in memory usage!

Uwe


[Haskell-cafe] space leak hints?

2009-07-03 Thread Uwe Hollerbach
Good evening, all, I wonder if I could tap your collective wisdom
regarding space leaks? I've been messing about with haskeem, my little
scheme interpreter, and I decided to see if I could make it run
reasonably space-efficiently. So far... no.

Here's what I tried: I wrote a tiny scheme program to compute Collatz
sequences for successive numbers, starting from 1 and incrementing
forever (well, in principle). Because real scheme implementations are
fully tail-call-optimized, this'll run in constant memory; I checked
that with mzscheme, and it does indeed work. With my little
interpreter, that's not the case: memory usage grows continually,
although apparently less-than-linearly. I've built the interpreter
with the profiling stuff described on the wiki and in RWH Ch 25 turned
on and have made a few runs with that; I stuck the postscript plot
that's the result of one of those runs onto my web site at
http://www.korgwal.com/haskeem/run2.ps.

The full source to the interpreter is a little large to paste into
this message; it's available on my web site, and also on hackage. But
according to the plot, there appear to be three main memory allocation
issues, and they seem to all be related, if I'm reading stuff
correctly. The core of the interpreter is a function, evalLisp, which
evaluates scheme forms. There are of course a fair number of different
forms, but the largest generic usage is evaluate each of a list of
forms, returning the value of the last of them as the overall result.
In order to express that in a couple of different places, and to
accomodate the possibility of an empty list, I have a really tiny
function lastOrNil which just calls last on a (haskell) list,
checking for the possibility of an empty list, and returning a haskeem
LispVal object:

 lastOrNil = liftM lON
   where lON [] = List []
         lON l  = last l

(sorry, proportional fonting may be throwing this off).

It's this function which is directly implicated in two of the top
three memory pigs, and nearly directly in the third one as well. If I
could eliminate the memory growth in these three cost centers, I would
already capture over 90% of the growth in this benchmark, which would
make me very happy indeed. But I don't really understand what is going
on here. It seems entirely plausible, indeed likely, that the list
which I'm traversing there is not fully evaluated. So I've tried
adding 'seq' to this function. Uhhh... from memory, I dumped it after
it didn't work, I had

 lastOrNil = liftM lON
   where lON []     = List []
         lON (l:[]) = l
         lON (l:ls) = seq l (lON ls)

(again, proportional fonting might mess me up here.)

As I said, I dumped this after it made no difference whatsoever. I
also tried bang-patterns in a couple of places, strictness annotation
of my basic LispVal types... nothing. It all made exactly no
difference, as far as I could tell. I've tried a couple of google
searches, but haven't come up with anything better than what I've
already described. So I'm a stumped chump!

I'd be grateful for any suggestions... thanks in advance!

Uwe


build question: ghc 6.11.20090605 - ghc 6.10.3?

2009-06-07 Thread Uwe Hollerbach
Hi, all, is it expected that a snapshot version (I'm using 2009.06.05)
should be able to build the released version 6.10.3? This weekend I
tried the "porting to an unsupported machine" procedure with the
aforementioned snapshot version; that worked, after some fussing, and
I seem to have a working ghc install, including several libraries and
cabal. But I tried building the Haskell Platform, and it warns
(rightly, as it turns out) about this being an unsupported version of
ghc. So I'd like to get onto the regular release train, as it were,
and transfer myself to ghc 6.10.3.

But that fails to build, after a while. Specifically, it goes until
it's trying to compile the stage1 version of ghc/Main.hs, and then it
complains that it can't find the file ../includes/ghcautoconf.h.
Now, that file does exist; but it looks like it's being referenced
from HsVersions.h, which lives in the lib/ tree of the installed
snapshot compiler, and relative to the location of that file, there is
indeed no such path. Is that a bug? If I create the includes directory
where HsVersions.h appears to be seeking it, and add in ghcautoconf.h
from where it got created during the setup of ghc 6.10.3, the build
proceeds (a little) further, but it still dies.

thanks... Uwe

PS I suppose I should describe in a little bit of detail what I did
for the porting, might be useful for someone else trying this. It was
a rather vanilla port, x86_64-unknown-linux to same; my target machine
is my workstation at work, where I don't have root, and I do have
permission from IT to install this stuff, but no support... so they
won't install rpms for me, or anything like that. It's also a slightly
old and somewhat customized version of some RedHat corporate release,
so recent plain-vanilla tarballs didn't work... hence the need to
port.

So: first problem was that my target machine identified itself as
x86_64-unknown-linux-gnu; note the 4-part identifier, rather than the
3-part that's more usual. That confused the configure scripts... I had
to hack up configure.ac to force it to see itself as
x86_64-unknown-linux. First problem solved...

After that, the back-and-forth of the various derived files was
tedious but straightforward, and stuff built well. The second problem
was that the command listed on the ghc porting page, "for c in
libraries/*/configure; ...", is specific to bash or sh, and my
workstation runs tcsh. No problem, I knew what to do, but it might be
an issue for someone who doesn't know.

The third issue was the sed syntax on the same page: the version of
sed that I was using didn't like the space between the -i and the
.bak. I don't know if that's a variation from one version of sed to
another, or a typo on the page, or??? Again no big deal, the error
message from sed was informative enough for me to figure out what to
do.

The fourth issue was that the "make all_ghc_stage2" command failed:
the linker complained that it couldn't find the hs_main symbol. I
found that in RtsMain.o, and manually added that object file to the
appropriate (?) library; then it worked. I'm not actually sure that I
got the actually-most-correct library, but... it didn't seem to break
anything.

After that, it went through ok. I checked that that stage2 compiler
could indeed build a working executable from haskell sources, that was
ok, and then I rebuilt that snapshot from scratch using that compiler.
That worked fine, and after that the various libraries needed to get
cabal up and running were straightforward.


[Haskell-cafe] Announce: haskeem 0.7.0 uploaded to hackage

2009-06-06 Thread Uwe Hollerbach
Hi, all, a little while ago I uploaded haskeem 0.7.0 to hackage: this
is my small scheme interpreter. I had been busy with other important
stuff for a while, and hadn't worked on it for a while; but I've now
updated it to build ok with ghc 6.10.3 + haskeline, plus I added a
simple macro system; that had been one of the two big items left on my
list for it. It's not yet full R6RS Scheme hygienic macros, but I'll
get there, too...

I had kind of a functional programming moment while doing this macro
system... I had been thinking and reading about it for, oh, a couple
of weeks now, and today I sat down to start that, with the hope of
maybe finishing it in a week or two. Nope! All in all, it took about
45 minutes, with the really important changes to the code spanning,
oh, call it a generous three lines of code. There were changes in
50ish other lines, but those were all pretty trivial, just adding
another data type. Such a let-down! :-) :-) :-)

There is one small regression in all of this, namely that with
haskeline I have lost the ability to interrupt the interpreter and
land back at the lisp prompt. I can interrupt it all right, but I
end up at the shell prompt... not so good! I know all that stuff can
be set up to interrupt a little less thoroughly than that, I just
haven't got around to it yet... soon, I hope.

Uwe


[Haskell-cafe] (Pre-) Announce: Data.GDS 0.1.0

2009-06-01 Thread Uwe Hollerbach
Hello, all,

I'm hereby announcing Data.GDS, a small module to write and
(eventually -- that's part of the pre) read GDS files. For those of
you not in the semiconductor biz, GDS-II is one of the classic formats
of the industry. It's perhaps ever so slightly obsolete at this point,
as the OASIS format is in the process of displacing it, but there are
still huge numbers of designs in GDS format, and lots and lots of
tools deal with it.

Since I'm a sad, sick weirdo(*), I spent a perfectly nice & sunny
NorCal day hacking up this initial version of this module. It is to
the point where it can generate a GDS file of your devising, although
your specification of it still has to be at a very low level. It would
be, and eventually will be, nicer to specify things at a higher level
of abstraction. Also, it will eventually be nice to be able to read
GDS files, returning an array of GDSRecord. I know how to do that, and
I plan to, but I haven't got there yet.

This ought to already be properly cabalized, and there's a small test
program included; run it, save the output somewhere, and compare that
with the sample GDS file which I also included in the tarball. If you
examine the GDS file itself, you will see that, although it is small,
it does in fact contain vital bits of design which will no doubt
enable the biz to continue Moore's law for at least another century or
so.

Once I've implemented the reader, I'll upload this to hackage; in the
meantime, if any of you are especially  interested in what the rest of
the interface to this should look like, I'm happy to hear your
suggestions!

Uwe

(*) In point of fact, I am neither sad nor sick; I am in fact mostly
happy & healthy. The reason I wasn't out taking a long walk today was
because, alas, I dinged one achilles tendon a few days ago, and wanted
to let it heal a bit... as to the weirdo charge, I beg you, gentle
readers, avert your eyes while I plead no contest! :-)


gds-0.1.0.tar.gz
Description: GNU Zip compressed data


Re: [Haskell-cafe] Runge-Kutta and architectural style

2009-05-02 Thread Uwe Hollerbach
Hi, Richard, these are interesting suggestions, I may explore them a bit.

I tried initially to make something that would be usable without too
much pain for a small-to-medium problem, and that could be used, albeit
with a performance hit, for a larger problem; but I'm sure I am
nowhere near what could be achieved for a larger problem. On the other
hand, it's quite possible that evaluation of the derivative function
would dominate, so that the time spent in the actual RK code would be
negligible; in that case, this might already be as good as it needs to
be.

Purely for my own interest (I doubt whether I'll make this public
unless someone specifically asks), I'm playing with a code generator
from this that will try to generate good code from these tables --
but it's C code so far, not Haskell. This isn't really a practical
thing; if I needed the ultimate C routine, it would have already been
faster and simpler to just write it directly instead of the
code-generation stuff I've been doing, but hey...

Uwe

On 4/26/09, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 I was interested to see a Runge-Kutta package
 posted to this list recently, particularly as
 I have a fairly simple-minded non-adaptive RK
 generator: an AWK script that takes a table
 and some optional stuff and spits out C.  The
 Haskell package is, of course, a lot prettier
 than my AWK program, as well as offering some
 adaptive methods, which is important.

 We can imagine a spectrum of RK packages:

 (1) Higher order function taking some runtime
  parameters.  That's what we got.

 (2) The same specialised for a known table at
  compile time.  Doable using the code that
  we were given plus SPECIALIZE pragmas.  I
  don't know how well that works across modules.

 (3) A generator that writes Haskell source.

 (4) Template Haskell.

 (5) A generator that generates native code to
  be called through FFI.

 The question is, how do you decide what's the
 appropriate approach?





Re: [Haskell-cafe] ANNOUNCE: Runge-Kutta library -- solve ODEs

2009-04-21 Thread Uwe Hollerbach
That sounds fine... I think I'll pick Numeric.RungeKutta. I'll
change it when I've cabalized & hackage-ized this puppy...

Uwe

On 4/20/09, Alexander Dunlap alexander.dun...@gmail.com wrote:
 It would also be nice if you could plug it into the hierarchical
 module system somewhere, perhaps renaming the module to
 Data.Algorithm.RungeKutta or Numeric.RungeKutta or
 Math.RungeKutta. This is pretty much the standard practice now, I
 think.

 Alex


[Haskell-cafe] ANNOUNCE: Runge-Kutta library -- solve ODEs

2009-04-19 Thread Uwe Hollerbach
Hello, all, I'm pleased to announce a small Runge-Kutta library for
numerically solving ordinary differential equations, which I'm hereby
unleashing upon an unsuspecting world. The README is as follows:

This is a small module collecting about a dozen Runge-Kutta methods
of different orders, along with a couple of programs to exercise them.

Build and run testrk, volterra, volterra2, and arenstorf:

o   testrk exercises all of the methods in a non-adaptive way,
solving a test problem with a known analytic solution,
to check convergence. (This was what first indicated that
there was a problem with the Fehlberg 7(8) listing in HNW.)

o   volterra uses a non-adaptive method to solve the Lotka-Volterra
equations from t=0 to t=40: either from a built-in starting point,
or from a starting point specified on the command line.

o   volterra2 does the same, except it uses an adaptive solver

o   arenstorf solves the restricted 3-body problem (earth+moon+satellite)
using an adaptive solver with some specific initial conditions
which yield periodic orbits

The volterra2 and arenstorf examples use an oracle function to
decide what is a good step size. Right now that oracle function is in
each test file; arguably it should be in the RungeKutta module.
Eventually it will be, but I haven't spent much time yet on making
that oracle especially good.
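
Purely as an illustration of what such an oracle typically looks like
(this is not the function shipped in the package), here is the standard
step-size controller for an embedded pair of order p, with an assumed
safety factor and clamping:

 -- Propose the next step size from the current step h, the estimated
 -- local error err, and the tolerance tol:
 --   h' = safety * h * (tol / err) ** (1 / (p + 1))
 oracle :: Double -> Double -> Double -> Double -> Double
 oracle p tol h err
   | err <= 0  = 2 * h                         -- negligible error: grow the step
   | otherwise = h * max 0.2 (min 5.0 factor)  -- clamp how fast h may change
   where factor = 0.9 * (tol / err) ** (1 / (p + 1))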

I have so far only tested it with ghc 6.8.3 on MacOS 10.3.9 (powerPC),
but I know of no reason why it wouldn't work with other versions and
OSs.

It's BSD licensed; in fact, I stole the LICENSE file from ghc (and
filed off the serial numbers).

I'm afraid I haven't messed with cabal much yet, so it's not
cabalized; neither is it uploaded to hackage, as I have no account
there. If someone wants to do either of those, please feel entirely
free to do so.

In addition to attaching a tarball to this message, I'm also putting
this onto my (sadly neglected) web site: it will live at
http://www.korgwal.com/software/rk-1.0.0.tar.gz.

Enjoy!

Uwe


rk-1.0.0.tar.gz
Description: GNU Zip compressed data


Re: [Haskell-cafe] ANNOUNCE: Runge-Kutta library -- solve ODEs

2009-04-19 Thread Uwe Hollerbach
Thanks, Martijn, glad to hear it's working for you too. I'll see what
I can do about cabal+hackage...

Uwe

On 4/19/09, Martijn van Steenbergen mart...@van.steenbergen.nl wrote:
 Hi Uwe,

 Uwe Hollerbach wrote:
 I have so far only tested it with ghc 6.8.3 on MacOS 10.3.9 (powerPC),
 but I know of no reason why it wouldn't work with other versions and
 OSs.

 It works fine on 6.10.1 on Leopard Intel as well.

 I'm afraid I haven't messed with cabal much yet, so it's not
 cabalized; neither is it uploaded to hackage, as I have no account
 there. If someone wants to do either of those, please feel entirely
 free to do so.

 It's really easy to do this, and I strongly suggest you do. To save you
 some work, here's a cabal file you can use that directly allows you to
 'cabal install' your package:

 (filename: rungekutta.cabal)

 Name: rungekutta
 Version: 0.1

 Cabal-Version: >= 1.2
 Build-Type: Simple
 License: BSD3
 License-File: LICENSE

 Library
   Build-Depends: base < 5
   Exposed-Modules: RungeKutta

 All you need to do is find a nice category for it (with matching module
 name), change the cabal file accordingly, create an account at hackage
 (see http://hackage.haskell.org/packages/accounts.html) and upload it.

 Hope this helps,

 Martijn.



Re: [Haskell-cafe] ANNOUNCE: Runge-Kutta library -- solve ODEs

2009-04-19 Thread Uwe Hollerbach
Damn, I never thought of that. Sorry! Guess I've only been
haskell-hacking on my little old iMac...

A new version is attached, with the version number bumped up by 0.0.1
and rungekutta.hs renamed to RungeKutta.hs. Hopefully it should work
out-of-the-box now. Off to patch my repository...

Uwe

On 4/19/09, Alexey Khudyakov alexey.sklad...@gmail.com wrote:
 I have so far only tested it with ghc 6.8.3 on MacOS 10.3.9 (powerPC),
 but I know of no reason why it wouldn't work with other versions and
 OSs.

 It breaks on Linux (and all unices) because the name of the file in the
 tarball is 'rungekutta.hs', not 'RungeKutta.hs' as required. In other
 words: god blame case-insensitive file systems.

 After renaming everything works just fine.



rk-1.0.1.tar.gz
Description: GNU Zip compressed data


Re: Can't compile GHC 6.8.2

2008-12-21 Thread Uwe Hollerbach
On 12/19/08, Simon Marlow marlo...@gmail.com wrote:
 lupus:~/ghc-6.8.3% ghc-6.8.3 -v
 dyld: relocation error (external relocation for symbol
 _pthread_mutex_unlock
 in ghc-6.8.3 relocation entry 0 displacement too large)Trace/BPT trap

 Failure! ... or is it?

 I'd guess that the size of the binary has caused some kind of overflow of a
 short relocation field.  Any experts in MacOS linking around?

 You might want to try the testsuite with stage1 and see whether the failure
 shows up anywhere else.

 Cheers,
   Simon

Thanks, I will try that and see what I can find out.

regards,
Uwe


Re: Can't compile GHC 6.8.2

2008-12-16 Thread Uwe Hollerbach
Hello Barney & all, I've been following your messages with some
interest, as I too have been trying to build a more-modern ghc on my
G3 iMac running 10.3.9. I started with an existing 6.6.1 build, and
tried to build 6.8.3; I'm finally at the point where I have something
to report, although I'm not sure if it's a success or a failure
report... :-/

I applied your patch to package.conf.in (approximately; the relevant
section wasn't quite identical), edited rts/Linker.c to add #define
LC_SEGMENT_64 LC_SEGMENT, and hacked up a wrapper script around ar to
always ranlib the library being processed; you had mentioned patching
cabal, but I decided the wrapper around ar was easier... a hack, but
what the hell.

After that, configure && make ran to completion without errors
(although it took a couple of days, since I had all the extralibs).

Success! ... or is it?

I installed the new compiler into /usr/local, then tested it by trying
ghc -v. Alas, no joy! It died with some dynamic-link error which
I've approximately reproduced here:

lupus:~/ghc-6.8.3% ghc-6.8.3 -v
dyld: relocation error (external relocation for symbol _pthread_mutex_unlock
in ghc-6.8.3 relocation entry 0 displacement too large)Trace/BPT trap

Failure! ... or is it?

I thought, how can this be?!? It built itself through stage2, it has
to be good! But clearly it isn't... So I tried one last thing: I tried
to use the stage1 compiler directly to compile the scheme interpreter
I wrote nearly a year ago. That initially failed, too, but for a
simple reason, and one I could work around: no readline in ghc 6.8.3.
Once I changed the scheme interpreter to not use readline, it
compiled, linked, and runs.

So... success or failure? I'm really not quite sure... I guess I could
try installing the stage1 compiler instead of the stage2 compiler, it
seems that it might work. But it would appear that there is still
something not entirely right in there.

regards,
Uwe


Re: [Haskell-cafe] build recent(ish) ghc on macos 10.3.9 powerpc?

2008-03-18 Thread Uwe Hollerbach
Hello, haskellers, a few days ago I had asked about building recent
ghc on macos 10.3.9. I have made a bit of progress along those lines,
here's a small update on what worked and what didn't. Instead of
trying to proceed with the porting procedure, I went back to an
install: I ended up going to 6.2.2. After a bit of fiddling, I found
that another bit of messing-about I had done to the machine was
messing me up: I had upgraded gcc from 3.3.x to 3.4.6 by just
downloading the sources and building. That produced a working gcc
3.4.6, but apparently the apple-aware ghc 6.2.2 and the apple-unaware
gcc 3.4.6 didn't play nicely together. Once I disabled gcc 3.4.6,
stuff worked again.

Then I tried building ghc 6.4.2 using the installed 6.2.2. This took a
lot of time, the compiler was able (as far as I could tell) to go
through all of its own bootstrapping, but after I happily installed
what I thought was a working ghc 6.4.2, the installed version was
unable to do anything: it segfaulted while trying to do ghc
--version. Oops... Out of somewhat morbid curiosity, I tried running
that inside the debugger, but gave up after several hours of elapsed
time. Back to the drawing board.

So I tried jumping directly from 6.2.2 to 6.6.1; that is, using 6.2.2
to build 6.6.1. That took another lot of time, but I'm happy to report
that it has produced a working executable: it did proceed all the way
through its bootstrapping, I verified that the executable in
compiler/stage2 was able to run ghc --version in-place, and after I
installed it, it was still able to do that; and (extra bonus! :-) )
it's also able to compile and link at least one small Haskell program
written by me! Cool!

I'm going to keep going forward and try to build 6.8.2, and I'll
report back if that succeeds, but this is already pretty good
progress...

regards, Uwe


Re: [Haskell-cafe] build recent(ish) ghc on macos 10.3.9 powerpc?

2008-03-16 Thread Uwe Hollerbach
Thanks for the tip, Judah, I went and got the ghc 6.4.1 package...
unfortunately, it installs ok, but isn't able to compile anything: I
get "Unknown pseudo-op: .subsections_via_symbols" when I try to build
a trivial test program. I found one note on the web that says "can't
work, needs newer version of XCode than comes with OSX 10.3", and
another note that says "you have to install all of XCode". So I'm
going to see if I missed installing some portion of XCode (I don't
think so, but it was a while ago), but otherwise I might just be out
of luck :-(

Uwe

On 3/15/08, Judah Jacobson [EMAIL PROTECTED] wrote:
 On Sat, Mar 15, 2008 at 2:04 PM, Uwe Hollerbach [EMAIL PROTECTED] wrote:
  
Hi, all, I have an old iMac G3, running OSX 10.3.9, to which I have a
sentimental attachment. I'd like to get ghc running on it, but the
pre-built binaries I can find are all for more-recent iMacs, so I
thought I would try to build it myself. I believe I read somewhere
that gcc 3.3.X didn't work quite right for recent ghc -- I'm trying
for now to build ghc 6.6.1 -- so I started by upgrading gcc to 3.4.6.
That's working. So, with that in place, I went to the porting ghc to
a new arch page and started going through the steps. I'm using a
laptop running linux as the host computer, so that's
i386-unknown-linux, some Fedora core derivative. It's using gcc 3.4.4.
  


 It looks like ghc 6.4.1 had an installer package for 10.3.9; does that
  work for you?
  http://www.haskell.org/ghc/download_ghc_641.html

  I think that the current version of ghc is supposed to be buildable
  with 6.4, so you might be able to bootstrap 6.6 or 6.8 that way,
  without going through the whole porting process.  Let us know if you
  run into problems with it.

  Hope that helps,

 -Judah



[Haskell-cafe] build recent(ish) ghc on macos 10.3.9 powerpc?

2008-03-15 Thread Uwe Hollerbach
[Augh! gmail won't send! Apologies if this shows up more than once...]

Hi, all, I have an old iMac G3, running OSX 10.3.9, to which I have a
sentimental attachment. I'd like to get ghc running on it, but the
pre-built binaries I can find are all for more-recent iMacs, so I
thought I would try to build it myself. I believe I read somewhere
that gcc 3.3.X didn't work quite right for recent ghc -- I'm trying
for now to build ghc 6.6.1 -- so I started by upgrading gcc to 3.4.6.
That's working. So, with that in place, I went to the porting ghc to
a new arch page and started going through the steps. I'm using a
laptop running linux as the host computer, so that's
i386-unknown-linux, some Fedora core derivative. It's using gcc 3.4.4.

I think I did all the initial stuff right, but in the section in the
porting guide where it says to start diving into all the
subdirectories and do a bunch of "make boot && make", I start getting
errors, and by the time it says to do that in .../libraries, it just
croaks.

Did I miss something earlier? Is this a completely hopeless endeavor
from the get-go? I'm not even sure what intermediate files or error
messages I should post. Any hints appreciated! If you want to tell
me "Uwe, you're a bozo", that's fine, too, and you could well be
right; just tell me why... Thanks!

Uwe


Re: [Haskell-cafe] how to catch keyboard interrupts?

2008-02-24 Thread Uwe Hollerbach
Thanks, I'll try that. -- Uwe

On 2/24/08, Ryan Ingram [EMAIL PROTECTED] wrote:
 This is pretty cool, but I have one warning:

  On Sat, Feb 23, 2008 at 4:37 PM, Uwe Hollerbach [EMAIL PROTECTED] wrote:
 data MyInterrupt = MyInt Int
 instance Typeable MyInterrupt where
   typeOf x = typeOf (0 :: Int)

  I am pretty sure that this makes Dynamic unsound; you could
  accidentally cast from an Int to a MyInterrupt or vice versa.  Try
  this instead:

   data MyException = Interrupt deriving Typeable

  then you can safely use throwDyn and catchDyn on this type.


-- ryan
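
A minimal sketch of the safer pattern Ryan suggests, assuming the GHC
6.8-era dynamic-exception functions (throwDynTo/catchDyn, which also
appear later in this thread); the handler wiring and the stand-in loop
are illustrative:

 {-# LANGUAGE DeriveDataTypeable #-}
 import Control.Concurrent (myThreadId)
 import Control.Exception (catchDyn, throwDynTo)
 import Data.Typeable (Typeable)
 import System.IO (hPutStrLn, stderr)
 import System.Posix.Signals (Handler (Catch), installHandler, sigINT)

 data MyException = Interrupt deriving (Show, Typeable)

 main :: IO ()
 main = do
   tid <- myThreadId
   _   <- installHandler sigINT (Catch (throwDynTo tid Interrupt)) Nothing
   loop `catchDyn` \Interrupt -> hPutStrLn stderr "interrupted!"
   where loop = mapM_ print [1 :: Integer ..]  -- stands in for the real work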



[Haskell-cafe] how to catch keyboard interrupts?

2008-02-23 Thread Uwe Hollerbach
Hi, all, I am continuing to mess with my little scheme interpreter,
and I decided that it would be nice to be able to hit control-C in the
middle of a long-running scheme computation to interrupt that and
return to the lisp prompt; hitting control-C and getting back to the
shell prompt works, but is a little drastic. So I looked at
System.Posix.Signals and after a bit of messing about got the
following:

 mysighandler =
    Catch (do hPutStrLn stderr "caught a signal!"
              fail "Interrupt!")

 runREPL :: IO ()
 runREPL =
    do getProgName >>= writeHdr
       env <- setupBindings [] True
       runInit env
       installHandler sigINT mysighandler Nothing
       installHandler sigQUIT mysighandler Nothing
       doREPL env

This compiles just fine, the interpreter runs as usual... but the
added code doesn't seem to do anything. You can probably guess
already... the print statement in mysighandler is there to see if it
actually caught a signal. It does: I see caught a signal! just fine,
in fact I see dozens of them as I lean on the control-C; but now my
scheme calculation doesn't get interrupted at all! I see in the
System.Posix.Signals documentation that the signal handler gets
invoked in a new thread; is this the source of the problem? If so,
what should I do to fix it? I'm afraid that sort of stuff is still
beyond my haskell-fu...

many thanks!

Uwe


Re: [Haskell-cafe] how to catch keyboard interrupts?

2008-02-23 Thread Uwe Hollerbach
Thanks, Bulat, I'll look into this!

On 2/23/08, Bulat Ziganshin [EMAIL PROTECTED] wrote:
 Hello Uwe,


  Saturday, February 23, 2008, 11:35:35 PM, you wrote:

   mysighandler =
      Catch (do hPutStrLn stderr "caught a signal!"
                fail "Interrupt!")


  scheme calculation doesn't get interrupted at all! I see in the
   System.Posix.Signals documentation that the signal handler gets
   invoked in a new thread; is this the source of the problem?


 yes, fail kills only this thread :)

  you should store the thread id of the thread running the interpreter and
  send an async exception to it. Control.Concurrent probably contains all
  the required functions



  --
  Best regards,
   Bulatmailto:[EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] how to catch keyboard interrupts?

2008-02-23 Thread Uwe Hollerbach
On 2/23/08, Bulat Ziganshin [EMAIL PROTECTED] wrote:

[about my question about keyboard interrupts]

  you should store the thread id of the thread running the interpreter and
  send an async exception to it. Control.Concurrent probably contains all
  the required functions

Most splendid! Here's what I did

 data MyInterrupt = MyInt Int
 instance Typeable MyInterrupt where
   typeOf x = typeOf (0 :: Int)

 catcher :: MyInterrupt -> IO ()
 catcher e = hPutStrLn stderr "interrupt!"

then later, in the REPL

 catchDyn (evalAndPrint env True line) (\e -> catcher e)

and in the initialization

 mysighandler tid = Catch (throwDynTo tid (MyInt 0))

and

 myTID <- myThreadId
 installHandler sigINT (mysighandler myTID) Nothing
 installHandler sigQUIT (mysighandler myTID) Nothing
 doREPL env

I had to add the MyInterrupt stuff because GHC was complaining about
ambiguous types; initially I had just (\e -> hPutStrLn stderr (show
e)) as the second arg of the catchDyn.

And try it in the self-test...

Now test number/string conversions... this'll take a bit longer

interrupt!
lisp>

It works! Cool! This is worth bumping the version number :-)

Many thanks!

Uwe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fast integer base-2 log function?

2008-02-15 Thread Uwe Hollerbach
Yes, I suspect you are right. I didn't look into that in much detail,
although I did try exchanging (2 ^ 5000) with (1 `shiftL` 5000);
but that didn't make any difference.

Uwe

On Fri, Feb 15, 2008 at 9:21 AM, Ryan Ingram [EMAIL PROTECTED] wrote:

 On Thu, Feb 14, 2008 at 8:23 PM, Uwe Hollerbach [EMAIL PROTECTED]
 wrote:
   Stefan's routine is, as expected, much much faster still: I tested the
   first two routines on numbers with 5 million or so bits and they took
   ~20 seconds of CPU time, whereas I tested Stefan's routine with
   numbers with 50 million bits, and it took ~11 seconds of CPU time.

 This seems wrong to me; that routine should take a small constant
 amount of time.  I suspect you are measuring the time to construct the
 50-million bit numbers as well.  If you constructed a single number
 and called this routine on it several times I am sure you would get
 far different results, with the first routines taking ~7-11s each and
 Stefan's GHC/GMP-magic taking almost nothing.

  -- ryan
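
To make that comparison cleanly, something along these lines separates the
construction cost from the log itself (a sketch only; the 5-million-bit size,
the timeIt helper, and the use of evaluate are all just illustrative):

 import Control.Exception (evaluate)
 import System.CPUTime (getCPUTime)

 -- base-2 specialisation of the binary-search routine from this thread,
 -- standing in for whatever is being benchmarked
 ilog2 :: Integer -> Integer
 ilog2 n | n < 2     = 0
         | otherwise = up 1 - 1
   where up a = if n < 2 ^ a then bin (a `quot` 2) a else up (2 * a)
         bin lo hi | hi - lo <= 1 = hi
                   | otherwise    = let av = (lo + hi) `quot` 2
                                    in if n < 2 ^ av then bin lo av else bin av hi

 -- run an action and report its CPU time in seconds
 timeIt :: String -> IO a -> IO a
 timeIt label act = do
     t0 <- getCPUTime
     r  <- act
     t1 <- getCPUTime
     putStrLn (label ++ ": " ++ show (fromIntegral (t1 - t0) * 1e-12 :: Double) ++ " s")
     return r

 main :: IO ()
 main = do
     -- build the big input once, timed separately from the log calls
     n <- timeIt "construct" (evaluate (2 ^ (5000000 :: Int) :: Integer))
     _ <- timeIt "ilog2, first call"  (evaluate (ilog2 n))
     _ <- timeIt "ilog2, second call" (evaluate (ilog2 n))
     return ()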

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fast integer base-2 log function?

2008-02-14 Thread Uwe Hollerbach
Hi, all, a few days ago I had asked about fast integer log-2 routines,
and got a couple of answers. I've now had a chance to play with the
routines, and here's what I found. Initially, Thorkil's routine taken
from the Haskell report was about 30% or so faster than mine. When I
replaced the calls to my routine powi with native calls to ^, my
routine became about 10% faster than Thorkil's routine. (I undoubtedly
had some reason for using my own version of powi, but I have no idea
anymore what that reason was... :-/ )

I initially thought that that speed difference might be due to the
fact that my routine had base 2 hard-wired, whereas his routine is for
general bases, but that seems not to be the case: when I modified my
version to also do general bases, it stayed pretty much the same. I
didn't do enough statistics-gathering to really be absolutely
positively certain that my routine is indeed 10.000% faster, but there
did seem to be a slight edge in speed there. Here's the latest
version, in case anyone's interested. I had previously had effectively
a bit-length version; since I made it general base-b I changed it to a
log function.

 ilogb :: Integer -> Integer -> Integer
 ilogb b n | n < 0      = ilogb b (- n)
           | n < b      = 0
           | otherwise  = (up b n 1) - 1
   where up b n a = if n < (b ^ a)
                      then bin b (quot a 2) a
                      else up b n (2*a)
         bin b lo hi = if (hi - lo) <= 1
                         then hi
                         else let av = quot (lo + hi) 2
                              in if n < (b ^ av)
                                   then bin b lo av
                                   else bin b av hi
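
For cross-checking any of these routines against one another, the defining
property of an integer logarithm makes a handy ghci spot check (a sketch;
propIlogb is a made-up name and assumes the ilogb above is in scope):

 propIlogb :: Integer -> Integer -> Bool
 propIlogb b n = b < 2 || m < 1 || (b ^ e <= m && m < b ^ (e + 1))
   where m = abs n
         e = ilogb b m

 -- e.g.  all (propIlogb 2) [1 .. 100000]   and   propIlogb 10 (2 ^ 100000)
 -- should both come out True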

Stefan's routine is, as expected, much much faster still: I tested the
first two routines on numbers with 5 million or so bits and they took
~20 seconds of CPU time, whereas I tested Stefan's routine with
numbers with 50 million bits, and it took ~11 seconds of CPU time. The
limitation of Stefan's routine is of course that it's limited to base
2 -- it is truly a bit-length routine -- and I guess another potential
limitation is that it uses GHC extensions, not pure Haskell (at least
I had to compile it with -fglasgow-exts). But it's the speed king if
those limitations aren't a problem!

Uwe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A question about monad laws

2008-02-12 Thread Uwe Hollerbach
On Feb 12, 2008 6:12 AM, Jan-Willem Maessen [EMAIL PROTECTED] wrote:


 On Feb 12, 2008, at 1:50 AM, David Benbennick wrote:

  On Feb 11, 2008 10:18 PM, Uwe Hollerbach [EMAIL PROTECTED]
  wrote:
  If I fire up ghci, import
  Data.Ratio and GHC.Real, and then ask about the type of infinity,
  it
  tells me Rational, which as far as I can tell is Ratio Integer...?
 
  Yes, Rational is Ratio Integer.  It might not be a good idea to import
  GHC.Real, since it doesn't seem to be documented at
  http://www.haskell.org/ghc/docs/latest/html/libraries/.  If you just
  import Data.Ratio, and define
 
  pinf :: Rational
  pinf = 1 % 0
 
  ninf :: Rational
  ninf = (-1) % 0
 
  Then things fail the way you expect (basically, Data.Ratio isn't
  written to support infinity).  But it's really odd the way the
  infinity from GHC.Real works.  Anyone have an explanation?

 An educated guess here: the value in GHC.Real is designed to permit
 fromRational to yield the appropriate high-precision floating value
 for infinity (exploiting IEEE arithmetic in a simple, easily-
 understood way).  If I'm right, it probably wasn't intended to be used
 as a Rational at all, nor to be exploited by user code.

 -Jan-Willem Maessen


Well... I dunno. Looking at the source to GHC.Real, I see

infinity, notANumber :: Rational
infinity   = 1 :% 0
notANumber = 0 :% 0

This is actually the reason I imported GHC.Real, because just plain %
normalizes the rational number it creates, and that barfs very quickly when
the denominator is 0. But the values themselves look perfectly reasonable...
no?
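
For what it's worth, a couple of quick checks seem to support both readings
(a sketch only; run it in ghci or as a Main, and the behaviour is as of the
GHC of this era):

 import GHC.Real (infinity, notANumber)

 main :: IO ()
 main = do
   -- fromRational special-cases the zero denominator, so these give IEEE values:
   print (fromRational infinity   :: Double)    -- Infinity
   print (fromRational notANumber :: Double)    -- NaN
   -- but Ord on Ratio just cross-multiplies, so with zero denominators both
   -- products are 0 and the two infinities no longer order properly:
   print (infinity <= (-infinity), (-infinity) <= infinity)   -- (True,True); the first is wrong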

Uwe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A question about monad laws

2008-02-11 Thread Uwe Hollerbach
Ratio Integer may possibly have the same trouble, or maybe something
related. I was messing around with various operators on Rationals and
found that positive and negative infinity don't compare right. Here's
a small program which shows this; if I'm doing something wrong, I'd
most appreciate it being pointed out to me. If I fire up ghci, import
Data.Ratio and GHC.Real, and then ask about the type of infinity, it
tells me Rational, which as far as I can tell is Ratio Integer...? So
far I have only found these wrong results when I compare the two
infinities.

Uwe

 module Main where
 import Prelude
 import Data.Ratio
 import GHC.Real

 pinf = infinity
 ninf = -infinity
 zero = 0

 main =
   do putStrLn ("pinf = " ++ (show pinf))
      putStrLn ("ninf = " ++ (show ninf))
      putStrLn ("zero = " ++ (show zero))
      putStrLn ("min pinf zero =\t" ++ (show (min pinf zero)))
      putStrLn ("min ninf zero =\t" ++ (show (min ninf zero)))
      putStrLn ("min ninf pinf =\t" ++ (show (min ninf pinf)))
      putStrLn ("min pinf ninf =\t" ++ (show (min pinf ninf)) ++ "\twrong")
      putStrLn ("max pinf zero =\t" ++ (show (max pinf zero)))
      putStrLn ("max ninf zero =\t" ++ (show (max ninf zero)))
      putStrLn ("max ninf pinf =\t" ++ (show (max ninf pinf)))
      putStrLn ("max pinf ninf =\t" ++ (show (max pinf ninf)) ++ "\twrong")
      putStrLn ("(<) pinf zero =\t" ++ (show ((<) pinf zero)))
      putStrLn ("(<) ninf zero =\t" ++ (show ((<) ninf zero)))
      putStrLn ("(<) ninf pinf =\t" ++ (show ((<) ninf pinf)) ++ "\twrong")
      putStrLn ("(<) pinf ninf =\t" ++ (show ((<) pinf ninf)))
      putStrLn ("(>) pinf zero =\t" ++ (show ((>) pinf zero)))
      putStrLn ("(>) ninf zero =\t" ++ (show ((>) ninf zero)))
      putStrLn ("(>) ninf pinf =\t" ++ (show ((>) ninf pinf)))
      putStrLn ("(>) pinf ninf =\t" ++ (show ((>) pinf ninf)) ++ "\twrong")
      putStrLn ("(<=) pinf zero =\t" ++ (show ((<=) pinf zero)))
      putStrLn ("(<=) ninf zero =\t" ++ (show ((<=) ninf zero)))
      putStrLn ("(<=) ninf pinf =\t" ++ (show ((<=) ninf pinf)))
      putStrLn ("(<=) pinf ninf =\t" ++ (show ((<=) pinf ninf)) ++ "\twrong")
      putStrLn ("(>=) pinf zero =\t" ++ (show ((>=) pinf zero)))
      putStrLn ("(>=) ninf zero =\t" ++ (show ((>=) ninf zero)))
      putStrLn ("(>=) ninf pinf =\t" ++ (show ((>=) ninf pinf)))
      putStrLn ("(>=) pinf ninf =\t" ++ (show ((>=) pinf ninf)) ++ "\twrong")
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] fast integer base-2 log function?

2008-02-11 Thread Uwe Hollerbach
Thanks, guys! It looks at first glance as if the code Thorkil posted is
similar to mine (grow comparison number in steps of 2 in the exponent, then
binary-search to get the exact exponent), while Stefan's version is more
similar to the walk-the-list idea I had in mind. I'll play with both of
these when I get a chance...

Uwe

On Feb 10, 2008 10:44 PM, Thorkil Naur [EMAIL PROTECTED] wrote:

 Hello,

 If the standard libraries provide such a function, I haven't found it. I
 must
 admit that I haven't studied your code in detail. I usually do as follows
 for
 integer logarithms, shamelessly stolen from the Haskell report:

-- Integer log base (c.f. Haskell report 14.4):
 
 imLog :: Integer -> Integer -> Integer
 imLog b x
   = if x < b then
       0
     else
       let
         l = 2 * imLog (b*b) x
         doDiv x l = if x < b then l else doDiv (x`div`b) (l+1)
       in
         doDiv (x`div`(b^l)) l
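
A couple of quick spot checks, in case anyone wants to try it in ghci (a
sketch; assumes the imLog above is in scope):

 main :: IO ()
 main = do
   print (imLog 2 (2 ^ (5000 :: Int)))        -- 5000
   print (imLog 10 (10 ^ (100 :: Int) - 1))   -- 99
   print (imLog 10 (10 ^ (100 :: Int)))       -- 100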

 Best regards
 Thorkil


 On Monday 11 February 2008 07:15, Uwe Hollerbach wrote:
  Hello, haskellers,
 
  Is there a fast integer base-2 log function anywhere in the standard
  libraries? I wandered through the index, but didn't find anything that
  looked right. I need something that's more robust than logBase, it
  needs to handle numbers with a few to many thousands of digits. I
  found a thread from a couple of years ago that suggested there was no
  such routine, and that simply doing length (show n) might be the
  best. That seems kind of... less than elegant. I've come up with a
  routine, shown below, that seems reasonably fast (a few seconds of CPU
  time for a million-bit number, likely adequate for my purposes), but
  it seems that something with privileged access to the innards of an
  Integer ought to be even much faster -- it's just a simple walk along
  a list (array?) after all. Any pointers? Thanks!
 
  Uwe
 
   powi :: Integer -> Integer -> Integer
   powi b e | e == 0    = 1
            | e < 0     = error "negative exponent in powi"
            | even e    = powi (b*b) (e `quot` 2)
            | otherwise = b * (powi b (e - 1))
 
   ilog2 :: Integer -> Integer
   ilog2 n | n < 0      = ilog2 (- n)
           | n < 2      = 1
           | otherwise  = up n (1 :: Integer)
     where up n a = if n < (powi 2 a)
                      then bin (quot a 2) a
                      else up n (2*a)
           bin lo hi = if (hi - lo) <= 1
                         then hi
                         else let av = quot (lo + hi) 2
                              in if n < (powi 2 av)
                                   then bin lo av
                                   else bin av hi
 
  (This was all properly aligned when I cut'n'pasted; proportional fonts
  might be messing it up here.)
  ___
  Haskell-Cafe mailing list
  Haskell-Cafe@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] fast integer base-2 log function?

2008-02-10 Thread Uwe Hollerbach
Hello, haskellers,

Is there a fast integer base-2 log function anywhere in the standard
libraries? I wandered through the index, but didn't find anything that
looked right. I need something that's more robust than logBase, it
needs to handle numbers with a few to many thousands of digits. I
found a thread from a couple of years ago that suggested there was no
such routine, and that simply doing length (show n) might be the
best. That seems kind of... less than elegant. I've come up with a
routine, shown below, that seems reasonably fast (a few seconds of CPU
time for a million-bit number, likely adequate for my purposes), but
it seems that something with privileged access to the innards of an
Integer ought to be even much faster -- it's just a simple walk along
a list (array?) after all. Any pointers? Thanks!

Uwe

 powi :: Integer -> Integer -> Integer
 powi b e | e == 0    = 1
          | e < 0     = error "negative exponent in powi"
          | even e    = powi (b*b) (e `quot` 2)
          | otherwise = b * (powi b (e - 1))

 ilog2 :: Integer -> Integer
 ilog2 n | n < 0      = ilog2 (- n)
         | n < 2      = 1
         | otherwise  = up n (1 :: Integer)
   where up n a = if n < (powi 2 a)
                    then bin (quot a 2) a
                    else up n (2*a)
         bin lo hi = if (hi - lo) <= 1
                       then hi
                       else let av = quot (lo + hi) 2
                            in if n < (powi 2 av)
                                 then bin lo av
                                 else bin av hi

(This was all properly aligned when I cut'n'pasted; proportional fonts
might be messing it up here.)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] background question about IO monad

2008-02-07 Thread Uwe Hollerbach
Thanks, I'm going to have to study this a bit...

Uwe

On 2/7/08, Ryan Ingram [EMAIL PROTECTED] wrote:
 On 2/6/08, Uwe Hollerbach [EMAIL PROTECTED] wrote:
  And, coming back to my scheme interpreter, this is at least somewhat
  irrelevant, because, since I am in a REPL of my own devising, I'm
  firmly in IO-monad-land, now and forever.

 This is not entirely true; a REPL can be pure.

 Consider the following simple stack-based-calculator; all the IO
 happens within interact, the REPL itself is pure:

 import System.IO

 main = hSetBuffering stdout NoBuffering >> interact replMain

 replMain s = "Stack calculator\n" ++ repl [] s

 repl :: [Int] -> String -> String
 repl _ [] = ""
 repl _ ('q':_) = ""
 repl s ('\n':xs) = show s ++ "\n" ++ repl s xs
 repl s xs@(x:_) | x >= '0' && x <= '9' =
     let (v, xs') = head $ reads xs in repl (v:s) xs'
 repl s (c:xs) | c `elem` validCommands = case command c s of
     Just s' -> repl s' xs
     Nothing -> "stack underflow\n" ++ repl s xs
 repl s (_:xs) = repl s xs -- ignore unrecognized characters

 validCommands = ".d+c"
 command :: Char -> [Int] -> Maybe [Int]
 command '.' (x:xs) = Just xs
 command 'd' (x:xs) = Just $ x:x:xs
 command '+' (x:y:xs) = Just $ (x+y):xs
 command 'c' _ = Just []
 command _ _ = Nothing
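
 A rough sample session with the above, terminal echo and buffering details
 glossed over (the file name is made up):

   $ runghc StackCalc.hs
   Stack calculator
   3 4+
   [7]
   d+
   [14]
   q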

 You can go further than interact if you want to abstract away the
 impurity in your system and take input from some outside process which
 has a limited set of impure operations.  Take a look here for an
 example using Prompt (which has seen some discussion here on
 haskell-cafe): http://paste.lisp.org/display/53766

 In that example, guess n is an action in the pure Prompt monad;
 different interpretation functions allow this monad to interact with
 an AI (in a semi-pure setting; it outputs strings), or with a real
 player via the full IO interface.  A similar mechanism could be used
 for the scheme REPL to make it as pure as possible, with
 getClockTime being replaced by prompt GetClockTime to interact
 with the outside world.
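
A minimal sketch of that idea (this is not the actual Prompt library API; the
Request/Prog names and the TOD test value are made up, and System.Time is the
old-time interface used elsewhere in this thread):

 {-# LANGUAGE GADTs #-}
 import System.Time (ClockTime(TOD), getClockTime)

 data Request a where
   GetClockTime :: Request ClockTime
   PutLine      :: String -> Request ()

 data Prog a where
   Done :: a -> Prog a
   Ask  :: Request r -> (r -> Prog a) -> Prog a

 -- the program itself only says what it wants, as data
 showTime :: Prog ()
 showTime = Ask GetClockTime (\(TOD secs _) -> Ask (PutLine (show secs)) Done)

 -- interpreter 1: run against the real world
 runIO :: Prog a -> IO a
 runIO (Done x)             = return x
 runIO (Ask GetClockTime k) = getClockTime >>= runIO . k
 runIO (Ask (PutLine s)  k) = putStrLn s >> runIO (k ())

 -- interpreter 2: run against a fixed clock, collecting the output
 runTest :: Prog a -> (a, [String])
 runTest (Done x)             = (x, [])
 runTest (Ask GetClockTime k) = runTest (k (TOD 1203000000 0))
 runTest (Ask (PutLine s)  k) = let (x, out) = runTest (k ()) in (x, s : out)

 main :: IO ()
 main = do
   runIO showTime
   print (runTest showTime)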

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] background question about IO monad

2008-02-06 Thread Uwe Hollerbach
Hi, all, thanks for the responses. I understand the distinction
between pure functions and impure functions/procedures/IO actions, it
just felt to me in the samples that I quoted that I was in fact
starting from basically the starting point, eventually getting to the
same endpoint (or at least a pair of endpoints that are not easily
distinguished from each other by looking just at code), and inbetween
one path was going through liftIO and the other not. But I guess it
comes down to the fact that, since I'm in a REPL, I'm wallowing in
impurity all the time (or something like that :-) )

regards,
Uwe

On 2/6/08, Bulat Ziganshin [EMAIL PROTECTED] wrote:
 Hello Uwe,

 Wednesday, February 6, 2008, 7:44:27 AM, you wrote:

  But after that, it sure seems to me as if I've taken data out of the
  IO monad...

 this means that you can't use results of IO actions in pure functions.
 your code works in some transformed version of the IO monad, so you
 haven't escaped it

 if we call pure functions "functions" and non-pure ones "procedures",
 the rule is "functions can't call procedures", but all other activity
 is possible. in your do_action you call a procedure (to format the
 current time) and a function (to format a given time). do_action is a
 procedure (because it works in the transformed IO monad), so you don't
 break any rules


 --
 Best regards,
  Bulatmailto:[EMAIL PROTECTED]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] background question about IO monad

2008-02-06 Thread Uwe Hollerbach
Well, you may well be right! Just think of me as a planarian at the opera... :-)

On 2/6/08, Jonathan Cast [EMAIL PROTECTED] wrote:
 On 6 Feb 2008, at 7:30 PM, Uwe Hollerbach wrote:

  Hi, all, thanks for the responses. I understand the distinction
  between pure functions and impure functions/procedures/IO actions,

 Um, I'm not sure of that, given what you go on to say.

  it
  just felt to me in the samples that I quoted that I was in fact
  starting from basically the starting point,

 Not really.  The key difference to understand is the difference
 between getCurrentTime and toUTCTime 42.  These are your
 `starting points' --- the rest is just pure code.

  eventually getting to the
  same endpoint (or at least a pair of endpoints that are not easily
  distinguished from each other by looking just at code), and in between

 Well, the in-between paths are pure (or can be) either way.  It's the
 starting point that needs liftIO or not.

  one path was going through liftIO and the other not. But I guess it
  comes down to the fact that, since I'm in a REPL, I'm wallowing in
  impurity all the time (or something like that :-) )

 jcc


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] background question about IO monad

2008-02-06 Thread Uwe Hollerbach
All right, after a bit of dinner and some time to mess about, here's
another attempt to check my understanding: here is a simplified
version of the lisp-time example:

 module Main where
 import System.Time

 pure_fn :: Integer -> String
 pure_fn n = calendarTimeToString (toUTCTime (TOD n 0))

 wicked_fn :: IO String
 wicked_fn = getClockTime >>= return . pure_fn . toI
   where toI (TOD n _) = n

 make_wicked :: String -> IO String
 make_wicked str = return str

 -- use of pure_fn
 -- main = putStrLn (pure_fn 123000)

 -- use of wicked_fn
 -- main = wicked_fn >>= putStrLn

 -- use of make_wicked
 main = (make_wicked (pure_fn 1234567890)) >>= putStrLn

If I use the first of the three main alternatives, I'm calling a
pure function directly: it takes an integer, 123..., and produces a
string. If I pass the same integer to the pure function, I'll get the
same value, every time. This string is passed to putStrLn, an IO
action, in order that I may gaze upon it, but the string itself is not
thereby stuck in the IO monad.

If I use the second of the three main alternatives, I'm calling an
IO action: wicked_fn, which returns the current time formatted as UTC.
In principle, every time I call wicked_fn, I could get a different
answer. Because it's an IO action, I can't just pass it to putStrLn in
the same way I passed in the previous pure_fn value, but instead I
have to use the bind operator >>=.

If I use the third of the main alternatives, I am starting with a
pure function: it's that number formatted as UTC (it happens to come
to Fri Feb 13 of next year), but then I pass it through the
make_wicked function, which transmogrifies it into the IO monad.
Therefore, as in the above, I have to use >>= in order to get it to
work; putStrLn (make_wicked (pure_fn 123...)) doesn't work.
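
Equivalently, with do-notation (a sketch reusing the definitions above;
main' is just a made-up name):

 main' :: IO ()
 main' = do str <- make_wicked (pure_fn 1234567890)
            putStrLn str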

deep breath

OK, after all that, my original question, in terms of this example:
"the IO monad is one-way" is equivalent to saying there is no Haskell
function that I could write that would take

 (make_wicked (pure_fn 123456))

and make it into something that could be used in the same way and the
same places as just plain

 (pure_fn 123456)

?

And, coming back to my scheme interpreter, this is at least somewhat
irrelevant, because, since I am in a REPL of my own devising, I'm
firmly in IO-monad-land, now and forever.

Right?

thanks, Uwe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] background question about IO monad

2008-02-05 Thread Uwe Hollerbach
Hello, haskellers, I have a question for you about the IO monad. On
one level, I seem to be getting it, at least I seem to be writing
code that does what I want, but on another level I am almost certainly
not at all clear on some concepts. In various tutorials around the
web, I keep finding this notion that the IO monad is one-way, that
you can put stuff into it, but you can't take it out. (And, damn, some
of the contortions I went through while trying to figure out how to do
exceptions in my little scheme interpreter certainly bear that out! I
was sure beating my head against liftIO et al for a fair while...) But
let me post some code snippets here:

 lispUTCTime [] = doIOAction (getClockTime) toS allErrs
   where toS val = String (calendarTimeToString (toUTCTime val))

 lispUTCTime [IntNumber n] =
   return (String (calendarTimeToString (toUTCTime (TOD n 0))))

This is a little function I added to the interpreter a couple of days
ago: if you enter (UTCtime) with no arguments, it gets the current
time and formats it as UTC: like so; this came from the first
alternative above:

  lisp> (UTCtime)
  Wed Feb  6 03:57:45 UTC 2008

and if you give an argument, you get that interpreted as a number of
seconds since epoch, and that gets formatted as UTC; this is from the
second alternative above:

  lisp> (UTCtime 1.203e9)
  Thu Feb 14 14:40:00 UTC 2008

And here's the doIOAction routine: I wrote this, it's not some
system-level routine.

 doIOAction action ctor epred =
   do ret <- liftIO (try action)
      case ret of
        Left err  -> if epred err
                       then throwError (Default (show err))
                       else return (Bool False)
        Right val -> return (ctor val)

OK, with all that as background, on one level I understand why I need
the doIOAction routine in the first version of lispUTCTime: I'm
calling getClockTime, that's an IO action, so I enter the IO monad,
get the time, and return: all is cool. In the second version, all I'm
doing is taking a number and interpreting it as a time, and writing
that in a particular format; again, no problem.

But after that, it sure seems to me as if I've taken data out of the
IO monad... haven't I? Given that the second alternative never entered
doIOAction and that after both are done I have a string of characters,
prettily formatted to indicate a time, that's what it feels like to
this unwashed C programmer.

So, what's going on there? What's the difference between the two
alternatives? I would appreciate any enlightenment you can send my
way!

regards,
Uwe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] more haskeem: it's just scheme in haskell now, no more almost

2008-02-03 Thread Uwe Hollerbach
Hello, haskellers, my mud pies are getting bigger: I have added an
exception-handling mechanism to haskeem (ie, for handling exceptions
at the scheme level, not the haskell level; that was already working),
and also hooked up the REPL to gnu readline: the example code for the
readline command was practically tailor-made for me! :-)

Also, I must apologize: I screwed up the URL earlier: I had said
http://www.korgwal.com/software/haskeem/ and then promptly went and
stuck it into http://www.korgwal.com/haskeem/ ... you may attribute
that to rapidly-advancing senility, if you like, I won't contradict
you. Anyway, I've fixed it by simply putting copies into both places.

regards,
Uwe Hollerbach
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] haskeem -- (almost) scheme in haskell

2008-01-22 Thread Uwe Hollerbach
Hello, haskellers, I am a total newbie in haskell, but I've been
making some mud pies that are coming out kinda pleasingly to my
admittedly biased eye. I found Jonathan Tang's excellent tutorial
"Write yourself a scheme in 48 hours" a few weeks ago, went through
and beyond that, and now have something that's starting to look
moderately like a real scheme interpreter. I've got much, though not
all, of the scheme number tower implemented, I have most of the core
syntactic forms (and, apply, begin, case, cond, define, if, lambda,
let, let*, letrec, letrec*, or, quote, set!) and am working on a few
more (delay and force are well along, guard is also in progress but
less so), and I have implemented a small trace facility. It's still a
toy, but it's starting to be a toy with some nice dance moves. I think
it could do the majority of the code in SICP by now. If you'd like to
have a look, please surf over to
http://www.korgwal.com/software/haskeem/index.html. Feedback welcome!

best regards,
Uwe Hollerbach
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe