Re: Haskell for non-Haskell's sake

2003-09-01 Thread D. Tweed
On Mon, 1 Sep 2003, Joost Visser wrote:

 Hi Hal and others,
 
 We would like to hear your thoughts on the viability of a conference or 
 workshop dedicated to applications of Haskell for non-Haskell purposes.
 
 On Saturday 30 August 2003 01:39, Hal Daume III wrote:
  I'm attempting to get a sense of the topology of the Haskell
  community.  Based on the Haskell Communities & Activities reports, it
  seems that the large majority of people use Haskell for Haskell's sake.
 
 This bias seems to exist not only in the Communities & Activities reports, but 
 also in the Haskell mailing lists and in the Haskell-related events, such as 
 the Haskell Workshop. One could easily deduce that Haskell is still very much 
 an academic language, of interest to language _designers_ more than to 
 language _users_. However, the reactions to your inquiry about use of Haskell 
 for non-Haskell purposes suggests that a significant group of language 
 _users_ does actually exist, though their voice is not heard too often.

Personal viewpoint:

I think I see a reasonable number of people asking questions about how to
achieve what they need to in Haskell, which, whilst it isn't often
explicitly stated, often appears to be because they've got a task that
they aren't `doing in Haskell for Haskell's sake'. When making your
contribution means spending 10 minutes writing an e-mail (such as this one)
there's no problem making your voice heard, and it's nice to think you're
being an active member of a very nice and helpful community.

When it's writing a paper for a conference -- which requires weeks of
concerted effort, requires that you (& the reviewers :-) ) believe you've
done something worth telling other people about, and requires finding
funding to attend the conference (funding which could perhaps be used to
attend a conference in an area where you are a specialist) -- and when your
peers in your `proper subject area' won't be interested in the result of all
this work, it seems natural (though of course not desirable) that most
`pure users' of Haskell don't have enough desire to do the work to make
their voice heard that way.

To put it in context, I wouldn't expect virtually anyone on the list
who works in some area (e.g., Hal appears to work in Natural Language
Processing) to have attended a conference for the language they wrote a
particular piece of software in, when that language was Java, C++,
Perl or Python (and I know there are conferences for those languages).

This isn't to put anyone off the idea of a Haskell applications
conference as such; I just think that before beginning there should be a
convincing rebuttal of the points above. There may well be one; it's VERY
possible I'm wrong/atypical.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: Haskell for non-Haskell's sake

2003-08-30 Thread D. Tweed
On Sat, 30 Aug 2003, Alastair Reid wrote:

 
  If you use Haskell for a purpose *other than* one of those listed below,
  I'd love to hear.  I don't need a long report; anything from a simple `I
  do' to a paragraph would be fine, and if you want to remain anonymous
  that's fine, too.
[snip]
 - FVision (Visual tracking)
 
   Given a bunch of simple image tracking primitives (written in C++ and
   assembly, part of the larger XVision system), build complex feedback
   loops, hierarchies, etc. to create more robust, flexible, sophisticated
   tracking systems.
   http://www.reid-consulting-uk.ltd.uk/alastair/publications/padl01/index.html
 
   Uses Haskell's ability to 'embed' domain specific languages inside it.
 
   [One could argue that this project was just 'Haskell for Haskell's sake'
but it's worth pointing out that it led to a complete redesign of XVision
along the lines I had developed in the Haskell version.]

I do research in computer vision/image processing and I've also used
Haskell quite a lot for doing prototyping of algorithms. I'm doing sort of
the opposite thing to Alastair: he's taking established low-level
image analysis techniques (written in C/C++) and combining them in
more effective ways using Haskell as a language for doing higher
level processing. (Apologies if this is an incorrect
understanding.) I work on more effective low-level image processing
algorithms, with the higher-level stuff being simple and stable enough that
coding it in C++ doesn't cause a problem. I do extensive prototyping using
simple Haskell implementations of ideas; once I'm reasonably happy that
the idea has a chance of working I then convert it to C++. I have to
convert to C++ for `real work' because (a) Haskell is too slow for most
of the low-level stuff, particularly `semi real-time' image processing
and (b) no-one else here knows Haskell so if I want to be able to share
code on common projects I need either C or C++. I eventually want to be
able to plug Haskell prototypes into the overall C++ structure, to do more
testing before moving to C++, but that awaits me having
enough free time to study the Haskell FFI, etc...
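
Purely as an illustration of the sort of thing I mean (all the names here are
invented, and this assumes GHC's FFI support is switched on), exporting a
prototype so the C++ side can call it is fairly direct:

{-# LANGUAGE ForeignFunctionInterface #-}
module Prototype where

import Foreign.C.Types (CDouble)

-- toy stand-in for a prototyped low-level operation: scale a pixel value
scalePixel :: CDouble -> CDouble -> CDouble
scalePixel gain x = gain * x

-- makes scalePixel callable from C/C++ (after hs_init on that side)
foreign export ccall scalePixel :: CDouble -> CDouble -> CDouble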

I'm very impressed with the FVision stuff and I've contrasted what I do
with it just to show that Haskell is being used at BOTH the high and the
low level.

I also use Haskell for some `scripting-stuff' level tasks like
autogenerating makefiles and processing log files. I write both Perl and
Python code where they seem best, so I can reasonably say that in those
cases where I use Haskell it's because I think it's easier for me than
those languages.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell


Re: idiom for producing comma-separated lists?

2003-08-08 Thread D. Tweed
On Fri, 8 Aug 2003, Antony Courtney wrote:

 I often need to format a list of strings using some character as a 
 *separator* rather than a terminator for the items.  Is there some 
 simple combinator or idiom from the Prelude or standard libraries that 
 could be used for this purpose?

I think the primary intended use of `intersperse' from the List library
is exactly this:

import List

mkSepStr = concat . intersperse ", "
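
so that, for example,

mkSepStr ["foo", "bar", "baz"]  ==  "foo, bar, baz"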

HTH,

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Debugging haskell

2003-02-24 Thread D. Tweed
On Mon, 24 Feb 2003, Malcolm Wallace wrote:

 Joe English [EMAIL PROTECTED] writes:
 
  Me either; in fact even 1/4 of the time debugging
  sounds quite high.
  
  When I first started using Haskell, most of my time
  went to fighting with the typechecker, but once the
  code checked it almost always Just Worked.  This is a
  pleasant experience.
 
 But surely fighting the typechecker is exactly a debugging activity.
 The great benefits are that you remove the bugs before even running
 the code, and you are finding specification errors in addition to
 mere implementation errors.

[standard caveat: I write research code for image processing problems,
which may well be very different to, e.g., code implementing a COM
wrapper.]

Mentally I classify bugs into two kinds: `source code bugs' and
`algorithm bugs'. Source code bugs are when the program you write isn't
doing the same thing as your mental model of your algorithm; algorithm
bugs are when your code is faithfully performing your mental algorithm,
but that algorithm itself doesn't do what you intended. You can't really
argue that `algorithm bugs' aren't bugs because

(i) it's easy to have a specification which is precise without being
obviously achievable or having any obvious algorithm for meeting it;
sometimes the only way to make progress on the program is to iteratively
develop an algorithm/implementation to solve it.

(ii) more importantly, when something doesn't work you don't get a clear
indicator of which kind of bug it is, and you often have to engage in
`traditional debugging' to finally see that the code is doing what you
think it is and that your errors are due to an algorithm bug.

If you discard `compilation-preventing, very very quick to solve' bugs
(e.g., missing semi-colons in C++, silly typecheck errors in Haskell) I
find that the ratio between source code bugs and algorithm bugs is maybe
1:5. This means that whilst I find Haskell a great deal easier to write
correctly than C++, there's not that much difference between debugging
times for Haskell and C++ because the algorithm-level bugs dominate. (In
fact, it's in some ways easier to do algorithm debugging in C++ because
it's very easy to temporarily stash things in global variables so you can
have them available for `debugging processing' in parts of the program where
they aren't available under the normal argument passing. Of course you then
have to clean the mess up...)

And I'd certainly say that 3/4 of time spent debugging is probably an
underestimate for me (given the definition of debugging above).

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: avoiding cost of (++)

2003-01-16 Thread D. Tweed
On Thu, 16 Jan 2003, Iavor S. Diatchki wrote:

 hi,

 just for fun i wrote the function in a different way.  it should perform
 pretty much the same way as your function.  i don't think the problem is
 (++) here, it is just the way this function is.  if f is going to use
 all of its argument, it doesn't matter that you concatenated the two
 lists, as you will be walking over the list anyways.  if it is going to

The other obvious thing to ask, given you say f doesn't care about the
order of its arguments, is whether you can write a version of f (f', say)
which `outputs its intermediate internal values' and another function
combineFStates which takes in two `internal values' and combines them into
a complete solution. Then at its very simplest (and very untested):

-- states after each successive non-empty prefix of the list
buildPartResults s f' []     = []
buildPartResults s f' (x:xs) = let e = f' s x
                               in e : buildPartResults e f' xs

-- for each element, combine the state of the prefix before it with the
-- state of the suffix after it (s is the initial `internal value')
mapWithout combineFStates f' s xs = zipWith combineFStates xs' (tail xs'')
  where xs'  = s : buildPartResults s f' xs
        xs'' = reverse (s : buildPartResults s f' (reverse xs))

AFAICS this reuses the partial evaluations of f which, as Iavor suggests,
are likely to be very significant if the lists are long enough that the ++
shows up as costly. Obviously this won't help if f _isn't_ something where
internal states can be updated incrementally or the combining step isn't
easy.
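
As a quick sanity check of the sketch above, taking addition as both the
incremental step and the combiner gives the sum of the list with each
element left out in turn:

mapWithout (+) (+) 0 [1,2,3]  ==  [5,4,3]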

Just a vague thought,

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: avoiding cost of (++)

2003-01-16 Thread D. Tweed
On Thu, 16 Jan 2003, Pal-Kristian Engstad wrote:

 It struck me though, if you have a function that calculates something on a
 list 'lst', and then you calculate something on 'lst ++ [a]', then surely one
 should be able to cache the results from the previous calculation.

I'm not a Haskell expert, but the code idea I posted (note the extra
arguments there carrying the initial `internal values') was based
on a variant of this idea, namely: move forward through the list producing
`internal values' (I'm trying to avoid using the phrase `internal state'
:-) ) for all prefixes of the list, then do the same for all the suffixes
of the list and combine the state for the prefix ending just before the
omitted item and the suffix beginning just after the omitted item to get a
full result. AFAICS this is O(n), whereas just doing prefixes this way
appears to still be quadratic because of the repeated evaluations of the
tail of the list.

Obviously being able to do this places some restrictions on what f can be
though.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: AW: AW: Editor Tab Expansion

2002-12-07 Thread D. Tweed
On Fri, 6 Dec 2002, Ingo Wechsung wrote:

 Beg your pardon, Marcin
 
 But they are compatible because there is one most universally accepted
 interpretation of a tab (move to the next multiple of 8 columns). Any
 other interpretation hampers portability and should be avoided.
 
 No. It didn't hamper portability with C, Java, Perl or any other *nix stuff
 for more than 30 years, except with COBOL, Python (?) and Haskell, where
 COBOL is excused due to its origin in the punch card era. The others are not
 excused, since they are newcomers yet to establish themselves. The wise new
 language designer should obey the unix rule `whitespace does not matter'
 instead of expecting the world to change their .exrc (by the way, according
 to ls -l my .exrc is several years older than Haskell) to accommodate.

Likewise, the initial-uppercase = typename/constructor, initial lowercase
= function/argument name has absolutely got to go because -- and I'm
afraid I haven't been able to track down a definitive reference -- (AFAIK
all versions of) the really old language Pascal has the rule that you
don't distinguish case at all in identifiers (see e.g,
http://www.marcocantu.com/epascal/English/ch02code.htm), and loads of
people used Pascal even up to my undergrad degree in 1993. Likewise, all
the work that's been done (or is going to be done) to allow Haskell programs
to be written using full Unicode characters (including decisions about how
`wide' a character is so that the layout rule can still be applied) is
clearly wrong because there's the unix rule that characters are `8-bit
ASCII codes'. And the absolute killer is the strong module interface which
allows the export of only certain entities from a module, when everyone
knows the C law that if you can find a prototype for a function (even if
it's just retyped in the calling .c file, not coming from the module at
all) then you can access it regardless of the module writer's desires.

In every case where you've got a new idea which conflicts with practices
from the past, you've got to decide whether the benefit derived from using
the new idea outweighs the difficulties caused. Virtually all of the
people who actually use Haskell would say (I believe :-) ) that they feel
the ease of reading, and the avoidance of occasions where `explicit
delimiters' say one thing but the programmer's indentation says another,
are good enough benefits to make this the `primary mode' of talking about
and using Haskell. (Remember that if you dislike this you can still use
the explicit delimiter syntax.) You seem to be saying that layout should
be banished because there's a set of people (of undetermined size) who
would like Haskell were they to encounter it, except for that darned
optional layout rule interfering with the ability to choose a personal
interpretation of what the ASCII tab character should mean.

[snip]
 But even if it were so and you could expand a tab to 8 columns only, there
 is still the tools problem. No meaningful diff -b on Haskell source.

What I really want is the -renameAlphaNumStrings option to diff which
renames (in a consistent, cross file way) all alphanumeric strings to
canonical values in some way and then outputs lines which only differ
after renaming. After all, if someone has simply renamed the variables in
a local function the semantics of the function hasn't changed so you don't
need to worry about it for diff purposes. I mean, a process like that
has got to be even better than one which takes advantage of the fact that
in some languages various combinations of ASCII characters do not affect
the program represented by a given file.

The point I'm trying to make here and above is that you're used to working
in a particular way using rules that apply to what you've worked with
before -- and there are facilities to enable you to work that way in
Haskell for people who want to -- but that you seem to object to the fact
that there even exists a different way that is used and really liked by
most of the people who work with Haskell. I at least think the layout rule is very
valuable for the two reasons I mentioned above, but I don't clamour about
the `problem of the explicit syntax for Haskell that means you can write
programs where the explicit syntax says one thing to the compiler and the
layout suggests something else to the human reader'; let those who prefer
that way do it without any hectoring from me.

[snip]

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Editor Tab Expansion

2002-12-06 Thread D. Tweed

 Whether spaces or tabs are better in source files is a matter of taste and
 a language should not force me to use one or another.

Well note that it doesn't only confuse compilers: if you post code for
other people to read (whose display software has its own personal
interpretation of what a tab character means) it may be confusingly
formatted (for C-like languages) or have its meaning outright changed
(Haskell, Python). I know I certainly hate reading/patching source code in
any language when the original writer used tab characters, because you have
to play the `figure out what interpretation of tab produces sensibly
laid-out code' game and then temporarily reset your editor to that. (You can
probably tell that I much prefer the mechanism where the tab key is a
configurable input mechanism but the representation in the file is an
unambiguous one using spaces.) I think it's really the fault of ASCII for
allowing a tab character code with user-defined semantics, just like that
other abomination, the form-feed character.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Q: Forcing repeated evaluation

2002-09-18 Thread D. Tweed

On 17 Sep 2002, Jan Kybic wrote:

collection. I want to try to force l to be generated on-the-fly
every time it is needed, to see if it improves performance.
What is a good way to do it? Would something like
   
  ...
   The easiest way is to make it a function
  
   l _ = [ i*i*i | i <- [0..n] ]   -- for very large n
 
  
  I asked a similar question a while ago, and (I think) there was general
  agreement that this was not a reliable solution because the expression

Note that (assuming that I'm not missing something) you can prevent the
moving of expressions involving l in a very ugly way by noting that these
`dummy argument functions' are polymorphic so that you could write

x1 = f1 (l 1)
x2 = f2 x1 (l 2)
x3 = f3 x2 (l 3)

(i.e., using a different argument for each one), since I'm pretty sure none of
the Haskell compilers attempt to replace an lhs by an rhs before
attempting lambda lifting. The nasty thing about this is of course that
the onus is now on you to ensure you don't repeat an argument; you don't
get any help from the type system.


___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Spam

2002-08-30 Thread D. Tweed

On Fri, 30 Aug 2002, Koen Claessen wrote:

 * Every once in a while, we get messages like `your e-mail
   is under consideration for sending to the list'. This
   suggests that the mailing list is moderated, and that
   there is some person deciding on what can and what cannot
   be sent to the list.

I can only recall seeing these on the hugs specific lists (hugs-users,
hugs-bugs), not the more general Haskell lists and they seem to be
auto-triggered by e-mail size. It wouldn't surprise me to learn they're
administered differently from haskell and haskell-cafe and just get
relayed through haskell.org. However I fully agree with you that spam is a
real problem (I estimate at least 50% of the spam I get comes from the
Haskell/hugs lists).

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: can a lazy language give fast code?

2002-07-31 Thread D. Tweed

On Wed, 31 Jul 2002, Andrew J Bromage wrote:

 Let me clarify what I meant by that and see if you still disagree.
 
 Realistically, _most_ new software installations today (I deliberately
 ignore legacy systems etc) are not overloaded, in that there are more
 computrons available than are required to perform the task required
 of them.  Of course this is partly because when you do a new
 installation, you add more than you need because you expect to grow.

I don't disagree at all with your conclusion that there are many factors
other than throughput that a programmer wants to know about and trade off
when choosing a language. What I disagree with is justifying this by saying
`almost all' processes are bound by things other than throughput. That may
be true in the average sense, but I don't think that all programmers have
almost all their programming tasks dominated by something other than raw
throughput; rather, there are sets of programmers who have all of their
tasks dominated by the need for something else (robustness, say) and some
who have all their tasks dominated by the need for raw throughput. To an
extent I'm being pedantic, but I do think it's important when re-thinking
benchmarks to recognise that it's a diverse world of programming out there,
and ideally we want programmers to be able to perform comparisons between
languages using the criteria that matter to them (and some may validly
value throughput) rather than changing from measuring on only one variable
(throughput) to measuring on a different, but still single, variable.

 Secondly, most non-embedded CPUs in the world are not overloaded
 either.  Chances are for a given desktop machine, it spends most of
 its waking hours waiting for the next keystroke or mouse movement.
 Web developers in particular know this: For the typical case, your
 application server runs at the speed of the network.

This is a perfect example of where using an average is pretty misleading:
my desktop machine spends maybe half of its time doing essentially
nothing, since my thinking time as I write programs and papers is long
enough that the text editor, etc., spends most of its time waiting on
input. The other half of the time it's running image processing code which
is essentially CPU bound, so it's running at close to 100% processor
utilisation. But (even on one of the robust-statistics definitions of) the
average would say my machine is using about half the processor power at
any given time instant. Clearly this isn't what's happening; there are
actually two regimes of operation which it switches between.

You make very good points in what I've snipped below; again, it's just
the use of `most' in a way that implies (to me) taking an average as
the representative of what everyone has to deal with that I `disagree
with'.

[snip]
  Of more
  concern to me is, when's the last time you actually got a well specified
  computational problem and a reasonable amount of time to write a carefully
  crafted program to solve it, (particularly when you had some reassurance
  that the very specification of what to solve wouldn't change after the
  first time you ran the code :-) )?
 
 Perhaps the ICFP contests are actually fairer as benchmarks than as
 competitions?

Interesting thought, particularly if the judges announced changes to the
problem to be solved half-way through :-)

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Need help

2002-07-24 Thread D. Tweed

On 23 Jul 2002, Alastair Reid wrote:

 
  You shouldn't _need_ to be in the IO monad to get random numbers
  (although if you choose to that can be a good choice). Clearly
  there's the need to initialise the generator, but if you want
  `random' random numbers (as opposed to a known sequence of random
  numbers for debugging) getting the value of time via an
  unsafePerformIO is as good as anything else. From then on, the
  pseudo-random number generator will deterministically produce what
  are hopefully `acceptably random looking' numbers.
 
 Isn't this a very dangerous practice?  It's so very, very easy to
 break referential transparency when using unsafePerformIO with
 functions known to produce observably different results each time you
 call it.  And once you do this, all kinds of nice Haskell properties
 go to hell.

I would imagine it's an incredibly dangerous practice; I've only actually
done this for top-level CAFs, and I'd imagine these are probably much less
likely to be duplicated by optimisation, which is why it hasn't bitten me.
In general I do get the seed in a top-level IO monad wrapper.
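
For concreteness, the sort of top-level CAF I mean looks roughly like this
(a sketch only; the module holding unsafePerformIO has moved around between
implementations, and the NOINLINE pragma is just there to discourage the
compiler from duplicating the definition):

import Random (StdGen, getStdGen)
import System.IO.Unsafe (unsafePerformIO)   -- IOExts in older implementations

{-# NOINLINE initialGen #-}
-- evaluated at most once; referential transparency is now on the
-- programmer's honour
initialGen :: StdGen
initialGen = unsafePerformIO getStdGen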

 Safer ways would be to use the monadic operators as intended to get
 random seeds and then use implicit parameters to pass them around
 (using a mild variation of John Hughes' approach to mutable
 variables).

That sounds like a good way (implicit parameters are on my `learn about at
some point' list). All I was trying to say, in my rather confused email,
is that __one__ Haskell idiom for dealing with random numbers is passing
around StdGen's, in contrast with the C idiom of `calling a random
number generator function', and that if you pass in an initial StdGen you
don't need to be within the IO monad to do processing involving random
numbers.

It wasn't clear to me whether Vincenzo's e-mail was saying that you just
needed to be in IO to generate the seed or that you need to be in IO to do
anything that involves generating random numbers __after you've got the
seed__. Since I have to admit I really dislike having monads extend beyond
the top couple of levels of a program I wanted to point out that actually
generating and using random numbers can be done outside IO.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Need help

2002-07-23 Thread D. Tweed

On Tue, 23 Jul 2002, Nick Name wrote:

 It's relatively simple.
 
 The random number generator is a pure function, so it cannot be
 nondeterministic. So, you have a way to build this function with a seed,
 since the author wanted you to be able to do so, I could say for
 completeness, or reuse sake.
 
 But what you want is nondeterminism. How can you get that? With I/O of
 course. So, apart from the fact I am not going to search the getTime
 function, but it's probable haskell has one, and apart from the fact
 that getTime is also nondeterministic, here's a program that prints a
 true random number. Since this program uses I/O, it's a monadic action. 

You shouldn't _need_ to be in the IO monad to get random numbers (although
if you choose to that can be a good choice). Clearly there's the need to
initialise the generator, but if you want `random' random numbers (as
opposed to a known sequence of random numbers for debugging) getting the
value of time via an unsafePerformIO is as good as anything else. From
then on, the pseudo-random number generator will deterministically produce
what are hopefully `acceptably random looking' numbers.

As I made some (frankly rather confused) contribution to the debate that
led to the current formulation of Random.hs, I'll have a go at explaining
the intended usage.

Supposing you wanted a program that generated a list of random names from
a preset list, then you could have

f :: [Int] -> [Name] -> [Name]
f xs names = map (\x -> names !! x) xs

where you supply a (lazily generated) infinite list of random integers in
xs. This is good because by changing the initialisation of the function
generating the lazy list, you can get reliably __the same random list__
for debugging purposes. The problem comes if you've got, e.g., a need to
generate two random lists: a good solution is clearly something along the
lines of

g :: [Int] -> [Name] -> [Name] -> ([Name],[Name])
g xs names1 names2 = (f ys names1, f zs names2)
   where (ys,zs) = `two statistically independent random lists generated
                    from xs'

The problem is to find a convenient way to split up the original list for
passing to sub-functions, where not only are the pieces independent as
complete lists, but also, no matter what pattern of further splitting the
sub-functions use, all the pieces below are statistically independent. The
idea used in the Randoms library is to have an abstract data-type StdGen
which you can apply `randoms' to in order to get an infinite list of random
numbers, and which you can (deterministically) `split' into two new StdGen's
which should give statistically independent sequences. With these you should get

(i) the ability to write programs by passing around either generators or
infinite lists, so that you can set them to produce the same results
each time for debugging purposes.

(ii) no need for the IO monad to infect functions purely because they need
random numbers.
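
For concreteness, a minimal sketch of how the `g' above might look in this
generator-passing style (assuming the Haskell 98 Random module's StdGen,
split and randomRs; the other names are just made up for illustration):

import Random (StdGen, split, randomRs)

type Name = String

-- pick names at random from a preset list, given a generator
pick :: StdGen -> [Name] -> [Name]
pick gen names = map (names !!) (randomRs (0, length names - 1) gen)

-- two statistically independent random name lists from one generator
g :: StdGen -> [Name] -> [Name] -> ([Name], [Name])
g gen names1 names2 = (pick gen1 names1, pick gen2 names2)
  where (gen1, gen2) = split gen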

I don't know the answer to the original poster's question; however
Randoms.hs contains

primitive getRandomSeed :: IO Integer

which I believe you can use either to get the seed within the IO monad
directly or via unsafePerformIO if you don't want the IO monad around.

HTH, 

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: [Fwd: F#]

2002-05-31 Thread D. Tweed

On Fri, 31 May 2002, Manuel M. T. Chakravarty wrote:

 I think, the probelm is .NET, not Haskell.  .NET just
 doesn't deliver on its promise (= marketing hype) of
 language neutrality.  The problem is that .NET is language
 neutral only as long as all languages are sufficiently close
 to C#.  Not just Haskell, but widely used languages like C++
 run into this problem, too (see .NET's Managed C++).

That may (or may not) be the case; I don't know. I was more wondering
about `what really makes it so daunting for someone working at Microsoft
(and who thus has more knowledge available about .NET than external
people) to implement a Haskell for .NET, especially given the existence of
F#?'

One of the thoughts behind this was the knowledge that it's just the two
Simons at Microsoft Cambridge now maintaining/developing GHC; _if it were
possible_ (and I'll quite concede it may not be), leveraging work done on
.NET for other purposes (particularly if .NET actually fulfils one of its
`promises' to be OS neutral) might decrease the amount of work needed to
keep one of the two remaining Haskell compilers (GHC, NHC) viable and
up-to-date.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: layout rule infelicity

2002-05-30 Thread D. Tweed

On Thu, 30 May 2002, Ashley Yakeley wrote:

 it). Certainly I find {;} more readable, and I suspect anyone else with a 
 C/C++/Java background (or even a Scheme/Lisp background) does too.</RANT>

Just a data point: I learned Basic, Pascal, Standard ML, C, Haskell, C++,
Perl, Python in that order and actively use Haskell, C++, Perl & Python at
the moment, and I find the `visual noise' of braces and semi-colons in C++
and Perl to be very irritating when, as Lennart points out, to be readable
by me my code has to embody these structures by layout anyway. (It's
primarily the noise of all those `fun', `val' and `end's rather than deeper
language issues that put me off looking at ML again.) Indeed, I (half) think
there ought to be a warning on the main page of Haskell.org saying `WARNING:
Using Haskell can lead to semi-colon blindness', since I relatively
frequently spend ten minutes trying to figure out why C++ code isn't
compiling only to realise that, whilst it is indented structurally, the
semi-colons are missing :-S

I suspect using layout rule is forever destined to be controversial...

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



RE: [Fwd: F#]

2002-05-30 Thread D. Tweed

On Thu, 30 May 2002, Don Syme wrote:

 going to provide.  Given the general complexity of GHC, the longish
 compile times and the reliance of the GHC library implementation on C
 and C libraries in so many places I decided to implement a simpler
 language from scratch.  I like the idea that a .NET compiler should be
 under 10K lines of code if at all possible, as is the case for F#.

Idle curiosity: which aspects of the Haskell language are the ones that
make it complicated -- e.g., long-standing stuff like lazy evaluation,
typeclasses & inference, etc., or newer stuff like functional dependencies,
or something else entirely -- and do they only make it complicated in
the context of the .NET architecture, or in any implementation? (I'm just
interested because there's little chance of Haskell becoming more
widespread if it's daunting enough to dissuade implementors.)

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: What does FP do well? (was How to get functional software engineering experience?)

2002-05-16 Thread D. Tweed

On Wed, 15 May 2002, Hal Daume III wrote:

 I tend to agree.  I keep meaning for experimental purposes to define a
 list type called AList or something which is syntactically identical to
 lists (i.e., you can use the familiar (:) and [] operators/sugar), but
 gets preprocessed out as actually being implemented with an array with a
 pointer to the current element.  Especially if we use unboxed types for
 such a thing, I imagine that on many applications this would give a boost
 in performance.

As a pointer, I vaguely recall that Phil Wadler (his homepage
currently seems to be
http://www.research.avayalabs.com/user/wadler/), way back in something
like 1984, was looking at something like this. The title was something
like `Listlessness is better than laziness'. I never actually read a copy,
and don't know where you'd get one from, but if you are thinking about
this sort of thing semi-seriously it sounds like something worth
consulting.

HTH 

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/  |  `It's no good going home to practise
email:[EMAIL PROTECTED]  |   a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   |   Sung In The Snow' -- Winnie the Pooh

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: e = exp 1

2002-05-02 Thread D. Tweed

On Thu, 2 May 2002, Serge D. Mechveliani wrote:

 I wrote about  e :: Double  for the Library.
 
 It can be obtained as  exp 1,  
 but I wonder whether it is good for the library to add the `e' 
 denotation.

Just a comment: my programming style (and others I've seen) uses
single-letter parameters a lot, so if it is added, a longer name would be
preferable. This is based on the fact that I don't use `e' (as opposed to
exp x) much.

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: finding ....

2002-03-19 Thread D. Tweed

[Moved to haskell-cafe]
On Tue, 19 Mar 2002, David Sankel wrote:
  *everytime* about race conditions. (of course using
  this existFile before
  creating a temporary file is wrong, but existFile
  has *many* other
  applications)
 
 Could someone post an example of the creation of a
 temporary file where race conditions are important?

IIRC one of the classic issues is

-
myscript wants to create a temporary file called /tmp/storedStuff
being half-careful, checks file doesn't exist
(*) minuscule delay occurs
creates a new file called /tmp/storedStuff
-

creating a new file generally deletes a file of the same name without
causing any errors, since this is often the desired semantics

All this is ok until a malicious person manages to get

ln -s /home/sankel/reallyImportantUnbackedupData /tmp/storedStuff

in the interval denoted by (*)

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: a more practical version of getLine???

2002-02-26 Thread D. Tweed

On Mon, 25 Feb 2002, Dean Herington wrote:

 If you're using GHC, take a look at module Readline in the util package
 (http://www.haskell.org/ghc/docs/latest/set/readline.html).  I don't know
 which other Haskell systems support this module.

The annoying thing is the way that terminals generally act on the ASCII
codes so it looks like the delete is working. The Readline library sounds
like much the best option; however, if it doesn't work, it is possible to
write a function which postprocesses the returned string and acts on the
ASCII codes for backspace, arrow movement, etc., although it's a bit
complex as you've got to handle deletion of characters both immediately
before and after the point you are considering in the string.
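
For the simplest case (just the backspace/delete codes, ignoring arrow-key
escape sequences) such a postprocessor can be quite small -- a rough sketch:

-- drop each backspace/delete code together with the character before it
stripBackspaces :: String -> String
stripBackspaces = reverse . foldl step ""
  where step acc c
          | c == '\b' || c == '\DEL' = drop 1 acc   -- erase the previous char
          | otherwise                = c : acc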

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: allowing non-sequentiality in IO

2002-02-16 Thread D. Tweed

On Sat, 16 Feb 2002, Hal Daume III wrote:

 The reason I ask is that I'm generating a FSM description file and it
 doesn't matter which order I list the transitions in.  I'm curious whether
 I could get the program to run any faster if I don't care about order.

I'm a bit confused here: assuming that by faster you mean decreased
_total_ run-time (and not because, e.g., you're piping the output to
another program and want the description lines to be produced at regular
intervals), why is the IO monad ordering going to be the big factor in
what determines this? I wouldn't imagine that in a non-concurrent Haskell
program there's any `locking' of IO resources, so there can't be contention
for that. I can imagine that you could dramatically decrease run-time on
machines with little memory by re-ordering some computations so that the
heap size is always small (and so you get less swapping) rather than
generating lots of heap items long before they are used, but this would
presumably be going on in the standard functional parts of your program.
And here you only ever get (ignoring the results of strictness analysis)
Haskell's standard evaluation order. (I.e., you could write a monadic
wrapper which processes a list of lazily produced transition lines,
outputting each new line as it becomes available, but I can't think of any
way to write the functional part of the computation so that it constructs
the list of transitions in such a way that each line is produced as
early as possible.)

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel

___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Re: syntax...(strings/interpolation/here docs)

2002-02-13 Thread D. Tweed

On Wed, 13 Feb 2002, David Feuer wrote:

  It would be a *signifigant* boon to those
  of us trying to get haskell into organizations
  by using it as maintainable perl/sh, and
 
 Haskell is not a maintainable perl/sh.  It is not a good 
 language for simple shell scripts, and is not good for 
 string processing either.  Have you tried Scsh?  Or 
 Python?  I don't think that supporting string hacking is a 
 major goal for Haskell.

It does rather depend on what you're doing with Perl -- if you're using it
very much as a skeleton for firing off lots of other programs, or doing
stuff that relies on a high level of ability with the filesystem (e.g.,
recursing over directory trees), then I don't think any of the existing
systems are good for this, and I doubt they would ever be as useful as
Perl. But if you're doing something more like prototyping an algorithm
which is mildly complicated, then the kind of things that make Perl/Python
nice (e.g., the excessive type declarations needed by C have vanished
(albeit for different reasons), garbage collection, higher-order
functions) start to apply to Haskell. To make this concrete, I have two
programs which were initially written in Perl for speed (one of them is
the makefile generator that crops up in all my bug reports :-) ) which got
confusing and tortured enough in Perl that I moved them to Haskell. I
think the two big disadvantages wrt Perl are (1) the comparative scarcity
and paucity of libraries (in particular, one which ran under Haskell 98 and
gave you the equivalent of a Perl hash would be very useful) and (2) the
way Perl is constructed to keep going in the face of problems like
undefined variables, etc., which would crash a Haskell script. For proper,
thoroughly debugged and tested programs (2) doesn't really matter, but I
can see it's useful for mod_perl scripts in Apache (say).
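
For what it's worth, the nearest off-the-shelf thing in plain Haskell 98 is
an association list -- a rough stand-in for a Perl hash that's fine for
small scripts, though lookups are O(n). The names below are just for
illustration:

type Hash key val = [(key, val)]

insertH :: Eq key => key -> val -> Hash key val -> Hash key val
insertH k v h = (k, v) : filter ((/= k) . fst) h   -- newest binding wins

lookupH :: Eq key => key -> Hash key val -> Maybe val
lookupH = lookup                                   -- the Prelude's lookup

so that, e.g.,  lookupH "wibble" (insertH "wibble" 3 [])  gives  Just 3.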

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: character syntax

2002-02-07 Thread D. Tweed

On 7 Feb 2002, Ian Zimmerman wrote:

 
 itz All this taken together, I mean, _really_, is the lexical
 itz structure of Haskell a botch, or what?
 
 Jon No. Innovative. All the problems described in this thread reflect
 Jon unwarranted assumptions inherited in emacs. It's plainly possible
 Jon to parse Haskell, and not hard either.
 
 First, parsing of a complete program (eg. by a compiler) is quite
 different from parsing a buffer that is being edited by a human.  The
 latter is hard, even for fairly well-specified languages.
 Irregularities only make it harder.

Just to show that an opposite point of view is possible: I've recently
been thinking about trying to use an ML dialect as part of some
interlanguage prototyping that I'd like to do, since it seems easier to
find descriptions of interfacing into it from outside that are
comprehensible to me. I originally learned ML before Haskell, and I
imagine that after a little while relearning that things aren't lazy and
that I shouldn't curry functions unless I need to, I'd probably get back
into it. But every time I look at it I just get put off by the (IMO) truly
awful syntax which is both verbose and seems designed for machine
parsing to the detriment of easy human understandability (e.g., ~ for 
unary minus and the #'c' character literal syntax and those damned
end's for terminating let-in constructions). And this is
quite important to me because I spend a lot of time reading and thinking
about code (particularly paper printouts) and not that much time doing
clever emacs searches. I will probably try again to get back into ML, but
it will be whilst suppressing feelings of frustration about the syntax.

 Second, this argument would be easier to accept if there in fact were
 an equally innovative tool capable of providing all the editing
 goodies Emacs normally does, for Haskell.  But I don't know of one,
 even now, 10 years or so after Haskell's birth.

That may be more indicative of the fact that few people in the community
find writing editing modes to be interesting things to do, and that emacs
is still using parsing tricks that made sense when people were editing on
slow, time-shared machines but not when the typical desktop machine is at
least a 200MHz pentium. There was recently a PhD position advertised
on the list in the area of refactoring functional programs; I'd be
surprised if whoever does that doesn't eventually end up with a GUI
(whether inherited from somewhere else or written as part of the project).

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel

___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: compile-time evaluation

2002-02-06 Thread D. Tweed

On Wed, 6 Feb 2002, David Feuer wrote:

 a more adventurous note, I think it would be very 
 interesting to be able to ask the compiler to make certain 
 values persistent across program executions.  This could 
 solve the same problem, but may also help deal with long 
 computations that may be interrupted, and should be 
 re-started at the same point.  (actually, I think this 

I too would like some way to enable `continuing where it left off'
for applications without having to actually write or design them that way
(particularly as I use laptops so much :-) ). But I think this is much
better dealt with at the OS level (I believe this is commonly called
checkpointing, something EROS (www.eros-os.org) was/is supposed to do).
That way not only is it applicable to programs in all languages
automatically, but AFAICS it seems the only way to tackle issues like
resurrecting all the file descriptors that a program had open, etc.

 would not be all that great... however, if non-trivial 
 memo tables ever get added to GHC, it could be _very_ 
 useful to store those across program runs for 
 simulations... that way, if the program is given input 12, 
 and told to calculate something with it and print it out, 
 if the program is interrupted, and re-started with the 
 same input, calculation could presumably start at the 
 point it left off...).  Of course, I have no clue how 
 realistic any of this is.

That's an attractive idea, but I'm not sure if it scales: I've got a
program that uses some clever kernel method to output `person' or
`non-person' over a digital surveillance sequence covering two hours. Even
with a compressing camera that's going to be at least 75MB. I run that over
six different `tapes' and the `seen this input before' cache is up to half a
gigabyte.

Of course the system would be designed to have limits, but then things
start to get complicated about what they should be, how the cache eviction
policy works, etc. I think this really is only feasible where the
programmer explicitly does this (using some convenient persistence
library). Might be wrong though :-)

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



RE: Programming style question

2002-01-11 Thread D. Tweed

On Thu, 10 Jan 2002, Mark P Jones wrote:

 | If I have defined a function like this..
 | f args = blah args
 | it could be re-written..
 | f = blah
[snip]
 - The second will compute a value of blah at most
   once, then cache the result for future use.  That
   could make a program run faster, but if the result
   of blah takes a lot of space, then it could result
   in a space leak.  The first might end up repeating
   the computation of blah each time f is called.
[snip] 
   Denotationally, the two expressions are the same.
   (In other words, they both produce the same value.)
   But the example above shows an operational difference
   in some implementation.  (As far as I can tell, however,
   nothing in the language definition either guarantees or
   prevents such behavior.)

Even sillier question: there's no other way of getting the optimization
that normCorr' has over normCorr (as always, on the understanding it may
be a space leak) in Haskell, is there?

dotProd xs ys = sum (zipWith (*) xs ys)

normCorr :: Floating a => [a] -> [a] -> a
normCorr xs ys = (dotProd xs ys) / sqrt ((dotProd xs xs) * (dotProd ys ys))

normCorr' :: Floating a => [a] -> [a] -> a
normCorr' xs = let e = sqrt (dotProd xs xs)
               in \ys -> (dotProd xs ys) / (e * sqrt (dotProd ys ys))

for use in, say,  corrWithSimpleSignal = normCorr' [1..100]  (this is a
contrived example, I admit).
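
The point of writing it this way is that in something like

  results = map corrWithSimpleSignal [signal1, signal2, signal3]

(signal1 etc. being hypothetical) the  sqrt (dotProd xs xs)  for xs =
[1..100] is computed once rather than three times, given the caching
behaviour Mark describes.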

I sometimes write such things but it doesn't leap out at me on rereading
the code later why I've defined e only to have it used (on first glance)
only once...

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Questions about sharing

2001-12-07 Thread D. Tweed

On Fri, 7 Dec 2001, Adrian Hey wrote:

 The first is..
 Does the compiler keep a unique copy of expressions which consist of just
 a single zero arity constructor (eg. [],True,Nothing..) as a CAF which is
 referenced each time the constructor appears in an expression, or does it
 duplicate the constructor (expression) each time it's used.
 Maybe I should define my own CAF at the top level and use it instead?
 (or perhaps they're unboxed somehow?)

Really idle curiosity... why would having a single copy of a zero-arity
constructor be more efficient than having multiple copies? Wouldn't they fit
into a `cell' which wouldn't be larger than the one that would
(IIRC) be used for the indirection to the CAF? (I can understand a larger
CAF being a win, but one this small?)

___cheers,_dave_
www.cs.bris.ac.uk/~tweed/|`...heat generated by its microprocessors will
email:[EMAIL PROTECTED]|slope upward exponentially, reaching the power
work tel:(0117) 954-5250 |density of a nuclear reactor before 2010'-Intel


___
Glasgow-haskell-users mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users



Re: Having constraints like 'Fractional Int => IO ()'

2001-11-13 Thread D. Tweed

On Tue, 13 Nov 2001, Jesper Louis Andersen wrote:

 This problem has had my attention for a while now. I hope someone would
 like to help me out on the problem.
 
 I have a simple average function defined as:
 
 mean :: (Fractional a) => [a] -> a
 mean l  = (sum l) / fromIntegral (length l)
 
 Which imposes a type constraint (of course). My problem is, that i
 cannot ``let go'' of this constraint. The constraint travels through the
 call tree of my functions and ends at the top-level function, which is
 defined as a Monad (I need some IO operations on some files). Now, I
 have tried to solve the problem, but the closest I have got is to make
 my main function be of type:
 
 main  :: Fractional Int => IO ()

In principle a type constraint is discharged (I hope this is the correct
word) in roughly the same way as a general polymorphic type, i.e.,

f :: a -> a                             f :: Eq a => a -> a

g (x::Int) = f x                        g (x::Int) = f x

===                                     ===
a `instantiated' here as Int            a `instantiated' here as Int
f used at type f :: Int -> Int          check: allowed as Int is an instance of Eq
g has type g :: Int -> Int              f used at type f :: Int -> Int
                                        g has type g :: Int -> Int

so that at some point somewhere higher up in the call tree the
polymorphism is `made concrete' and type-class constraints are discharged
by some combination of type signature information, manual typing of
expressions with :: as above, and pattern-match data. (This may not be the
correct terminology; I'm only a Haskell hobbyist.)

So it sounds like you're using polymorphic functions (and hence having
undischarged type constraints) all the way to the top of your monad, when
you really need (for the program to make sense) to reduce things to
concrete types at some point. E.g., I can imagine you are using read on a
string derived from the file in such a way that the type it gives back is
`Fractional a => a' when it actually ought to be specified as returning a
`Double'. However, this guess may be wrong.
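
A minimal sketch of the kind of fix I mean (the file name and layout here
are made up): pin `read' (or the result of `mean') down to a concrete type
such as Double, and the constraint is discharged before it can reach main:

mean :: Fractional a => [a] -> a
mean l = sum l / fromIntegral (length l)

main :: IO ()
main = do s <- readFile "data.txt"
          let xs = map read (lines s) :: [Double]   -- pin read down to Double
          print (mean xs)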

 they seem to have bitten me hard. If anyone could point to the relevant
 information, it would be very nice.

I can't immediately think of a good source of information about type-class
issues in particular, but the Gentle Introduction and any Haskell textbook
should mention it.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: = vs ->

2001-10-10 Thread D. Tweed

On 10 Oct 2001, Ketil Malde wrote:

 Mark Carroll [EMAIL PROTECTED] writes:
 
  On Tue, 9 Oct 2001, Ashley Yakeley wrote:
 
  At 2001-10-09 11:55, Mark Carroll wrote:
 
  What is the rationale for when Haskell demands a = and when it
   demands a ->?
 
 Okay, I can't give you anything formal, but here's my intuitive
 understanding of things
 
  e.g. 
 
   x :: Integer -> Integer
 
 A function from an Integer to an Integer.  Even more obvious if you
 have one more parameter:
 
   g :: Integer -> Integer -> Integer
 
 g takes an Integer and returns a function that takes an Integer and
 returns an Integer.  Equals-assignment would be very non-intuitive
 here. 

As I understand it, the equals sign is used whenever the items on both
sides are equal, i.e., one side can be replaced with the other without
changing meaning. Of course, in the case of a function definition it's the
degenerate equality you get from defining the lhs in terms of the rhs. The
-> is used whenever you've got something on the right that `leads to'
something on the left, eg

  case x of
    Just y  -> True
    Nothing -> False

It is not the case that `Just y' is the same as True, so = is clearly
inappropriate. Likewise for lambdas (\x -> x+2 doesn't have x = x+2).

It's perhaps less clear because after using functional languages for any
length of time you get very used to thinking of function definitions as a
restricted kind of rewrite rule, and rewrite rules may not necessarily 
have any  connection to a notion of equality.
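
A small made-up example putting both symbols in one place:

-- `=' defines an equation: the lhs can be replaced by the rhs anywhere.
double :: Int -> Int
double x = x + x

-- `->' marks `leads to': in types, case alternatives and lambdas
-- there's no claim that the two sides are equal.
describe :: Maybe Int -> String
describe m = case m of
               Just _  -> "got a value"
               Nothing -> "got nothing"

addTwo :: Int -> Int
addTwo = \x -> x + 2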

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: = vs ->

2001-10-10 Thread D. Tweed

On Wed, 10 Oct 2001, D. Tweed wrote:

 degenerate equality you get from defining the lhs in terms of the rhs. The
 -> is used whenever you've got something on the right that `leads to' to
                                                  ^^^^^ [should read `left']
 something on the left, eg
                   ^^^^ [should read `right']

Being bad with these elementary terms makes using foldr, foldl, etc. a
bit difficult for me :-S

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: = vs ->

2001-10-10 Thread D. Tweed

On Wed, 10 Oct 2001, Mark Carroll wrote:

 On 10 Oct 2001, Ketil Malde wrote:
 (snip)
  function definitions.  Perhaps one could have had a syntax like
  
  z a =
    | a == 1 -> 1
    | a == 2 -> 3
  
 instead, as it'd make it more consistent with the case, but I suppose
  there's a reason for it being the way it is.  The case statement is an
 (snip)
 
 Ah, yes - it was this 'discrepancy' that was one of the sources of my
 confusion, as a == 1 obviously doesn't 'equal' 1.

I think this comes about from history; in the functional languages like
Miranda & Orwell that preceded Haskell an extended version of the function
above would have been written

  z a = 1   if a==1
      = 2   if a==2
      = 3   otherwise

which looks a lot like traditional mathematics and where the equals makes
sense. I'm not sure why anymore but Haskell changed the `if clause after
the value' to `pattern guard | before =', so I agree it now looks as if
it's stating that the pattern guard is equal to the rhs. 
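
For comparison, the same extended function in actual Haskell guard syntax
reads

z a | a == 1    = 1
    | a == 2    = 2
    | otherwise = 3

with the `=' kept on each alternative and the condition sitting between
the `|' and the `='.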

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Student Programming Projects

2001-09-21 Thread D. Tweed

  Next Semester, I am supposed to teach a short course in Haskell.
  Can anyone recommend interesting programming projects which can
  be completed in about a month? Thank you very much.

This doesn't come from direct experience and you don't specify what the
students will already know, whether they're all `good' programmers, etc.,
but...

Maybe some sort of simulation/control type problem, something like say
managing a supply depot under requests for products from ultimate
consumers and issuing requests for more stock from the ultimate
manufacturers (which may be filled unreliably). The key reason for
suggesting this is that it avoids being so mathematical that computer
science students who dislike maths will be turned off (at least as much
:-) ), it has a natural usage of infinite lists (consumer demand,
manufacturer responses) and it has lots of opportunities for smart-alecks
to show off whilst not being impossible for a weaker student to complete
usefully (and in an extreme case you could write an overall framework into
which really weak students can write components to be plugged in, thus
allowing them to demonstrate some `micro' grasp of Haskell when they
haven't got a macro grasp.)

(The weaker students bit is more from a belief that a 40% student should
be able to get a 40% mark on some coursework, rather than it being
impossible to get less than about 80% because the project leads to
programs which either work almost perfectly or not at all. I'm not
suggesting making it ridiculously easy.)
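
As a rough sketch of the infinite-list angle (all the names and the
stocking rule here are invented), the depot can be modelled as a function
from the stream of demands to the stream of reorder requests:

depot :: Int -> [Int] -> [Int]
depot stock (d:ds)
  | remaining < threshold = reorderAmount : depot (remaining + reorderAmount) ds
  | otherwise             = 0 : depot remaining ds
  where
    remaining     = max 0 (stock - d)     -- serve what demand we can
    threshold     = 10
    reorderAmount = 50
depot _ [] = []

so e.g. `take 5 (depot 20 (cycle [3,8,12]))' runs happily on an infinite
demand stream, thanks to laziness.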


___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.



___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: The future of Haskell discussion

2001-09-14 Thread D. Tweed

As a general question (and forgive my ignorance): are the various FFIs
implemented using something like `dlopen', or are they done by actually
putting suitable stubs into the Haskell-generated C code which then gets
compiled by the C compiler as part of the overall Haskell compilation?

On 14 Sep 2001, Marcin 'Qrczak' Kowalczyk wrote:

 I think it should be easy to add support for C++, except exceptions.
 
 There are two approaches: call C++ functions directly (it requires
 implementing name mangling by the Haskell compiler; there are several
 name mangling schemes and gcc changed its scheme in version 3.0)
 or write C wrappers (this is inconvenient but is doable now without
 compiler support).

I suspect that there are several different levels of C++ interface; here's
my personal taxonomy (which other people may disagree with :-) ) :

(1) Firstly there's calling code where the interface is basically C but
compiled with C++; at this level there's the issue of name mangling
(accounting both for argument structure and namespace effects) and any
global stuff in the C++ code (e.g., ensuring global objects'
construction/destruction happens at times that are ok).

(2) Then there's being able to invoke the method of an object without
caring about moving `inside object' information back into Haskell (e.g.,
calling the colour_segment() member of an object of the Image class). Here
the object is essentially just acting as a `struct with attached function
pointers'.

(3) Then there's being able to actually use objects more fully via a
Haskell type/type-class wrapper of some sort (so that for example objects
of C++-class X_cpp with a member function

string show(void) const

could be used in, e.g.,

display :: [X_haskell] -> IO ()
display = sequence_ . map (putStr . show)

Obviously this throws up issues of the semantics that need to be solved
(e.g., can I only use const member functions, or can I use them all
providing I embed the object in a monad?) and is (I imagine) a heavy
research and implementation project.

(4) Finally there's being able to propagate C++ exceptions into Haskell,
using either Haskell exceptions or some other representation. (This is
clearly incredibly hard, but I believe some package (forget which) manages
to propagate C++ exceptions into Python exceptions.)

From my limited understanding, the really nice bits about the KDE
framework (e.g., embedding application objects inside others) would
require at least level (3) and possibly even (4).
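
For concreteness, here's a rough sketch of the `level 1/2' style of
wrapping -- all the names, and the imagined Image class, are invented for
illustration; the C++ side just exposes an extern "C" interface and the
Haskell side imports it through the FFI:

{-# LANGUAGE ForeignFunctionInterface #-}
module ImageFFI where

-- Hypothetical C++ side (image_c_api.cpp), compiled with the C++ compiler:
--   extern "C" {
--     void *image_new(const char *path) { return new Image(path); }
--     void  image_segment(void *img)    { static_cast<Image*>(img)->colour_segment(); }
--     void  image_delete(void *img)     { delete static_cast<Image*>(img); }
--   }

import Foreign.Ptr (Ptr)
import Foreign.C.String (CString, withCString)

data ImageHandle   -- opaque stand-in for the C++ object

foreign import ccall "image_new"     c_imageNew     :: CString -> IO (Ptr ImageHandle)
foreign import ccall "image_segment" c_imageSegment :: Ptr ImageHandle -> IO ()
foreign import ccall "image_delete"  c_imageDelete  :: Ptr ImageHandle -> IO ()

segmentFile :: FilePath -> IO ()
segmentFile path = withCString path $ \cpath -> do
  img <- c_imageNew cpath
  c_imageSegment img
  c_imageDelete img

The real KDE-style embedding would of course need much more than this.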

 An annoyance is that templates can't be called directly but each
 instance must be imported separately.

Indeed, with a potential additional problem: many template functions are
written as inlines in header files, so that if I try and use a template
function I wrote years ago in a .cc file containing a C++ class I defined
yesterday I silently get the correct code compiled into the new .o
file. If I try to `glue' together a template function and a new C++ type
(where they haven't been used together otherwise) where does the new
instantiation go; do I have to go around adding explicit instantiation
requests in the C++ source?

 On Linux it works when main() of a mixed C/C++ program is written in C.
 AFAIK it doesn't work everywhere. Nevertheless an example I've now
 made worked.
 
 hsc2hs and ghc need to be extended to make it work smoothly. hsc2hs
 produces a file with extension .c and ghc compiles these files by
 passing -x c options to the C compiler, so even if a C++ compiler is
 substituted, it is compiled as C. There should be a switch in hsc2hs
 to let it produce C++ and ghc should recognize .cc extension, or
 in some other way ghc should be informed that the .c file is really
 C++. Option -pgmlg++ causes ghc to link using g++; option -lstdc++
 instead also works. And hsc2hs should be taught about extern "C".

All these are useful things; however I'm just pointing out that there's
various degrees of interoperation with C++. My personal position at the
moment is that I want to be able to use the level 1 facilities above with
minimal effort (and to be able to call in to Haskell from C++, but that's
another story) for program development purposes. If there's
any coding that better informed people can suggest to make
interfacing with C++ easier I can try and help with it, but unfortunately
(1) I'm unreliable; (2) I can't justify doing development on interfacing
C++-Haskell as part of my job so it'd only be during my very scarce free
time; (3) did I mention I'm incredibly unreliable? 

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: newbie conceptual question [from haskell list]

2001-07-27 Thread D. Tweed

Important confession since Fergus is in the discussion: I've not actually
read any of the C or C++ standards; I've got an impression of what they
say from various textbooks and the gcc mailing lists.

On Fri, 27 Jul 2001, Fergus Henderson wrote:

 But there are so *many* such stupidities.
 
 If you took out all of the things in C that had
 implementation-defined, unspecified, or undefined behaviour,
 then what you'd be left with wouldn't be C.

 If you were talking about some other C-like language, such as
 LP-C, Java, or C#, then I think your point might be valid.
 But the differences between C and higher-level imperative (or OOP)
 languages should not be ignored.

I'm sure you're right. However, this seems to fit paradoxically with my
personal experience, and that of the people that I discuss this stuff with
at work. This may be something to do with the fact that we're developing
programs of the scientific kind (e.g., image segmentation) and that
virtually everyone tends to unconsciously follow the KISS (Keep it simple,
stupid) principle: our goal is very much to produce programs that `do cool
things' rather than `are cool due to the way they are coded'. And I
_still_ don't think that there are that many bugs that hit me due to
implementation definedness or weird corner-cases in the syntax. If you had
a time machine, took a piece of code that I'll write in the future with a
bug in it and asked me to figure out what the bug in there was I bet that,
in most cases given just enough time, patience, paper and coffee I could
find the bug without a computer, and it wouldn't involve the compiler
defining something one way whereas I thought the standard defined it
another. I generally don't produce programs with bugs because the compiler
implements some particular construct differently to the way that I think
it should, but because I can't keep the whole program and the interactions
between all its parts straight in my very limited mind.

(I feel like a bee-keeper at apocryphal dinner party where it's said
that by the laws of physics bees can't fly: all the other parties in this
discussion are much more technically competent and well-read than me and
yet my actual experiences don't seem to accord with their conclusions
:-) )

  Likewise, C and
  C++ have specifications which are met to a reasonable degree by many
  compilers out there, which prompts the question: when was the last time a
  bug in your code in an imperative language was due to an
 implementation-defined part of the language?  In 5 years of intensive C++
  programming I'd guess I've made maybe 25 bugs due to this, but the number
  of bugs I've fixed in my code must be in the tens of thousands.
 
 If you also include areas which are unspecified or undefined, as well
 as those that are implementation-defined, I think it would cover a
 very substantial proportion of those tens of thousands of bugs.
 
 How many times have you written code that accidentally referenced
 an uninitialized variable, for example?  Or that accessed memory
 after it had already been deallocated?  Or that tried to access
 past array bounds?  These are very common kinds of errors.

I haven't been very precise in my terminology (and as I say haven't
actually read the standards). When I talk about something being
implementation defined I mean that the precise details of how it is
defined in an implementation need to be understood for a `bug-free'
program to work. I've been calling a statement (which I'm not claiming to
have read verbatim anywhere) like `accessing memory outside allocated
array bounds can non-deterministically corrupt any part of the overall
state of the program, return correct or gibberish results or cause the
program to abort' a perfectly reasonable, defined statement of what to
expect. It can certainly be viewed as either `under'-defined, or as saying
that what actually happens is implementation defined. But I don't consider
that when I screw up by accessing memory outside array bounds it's a bug
due to implementation-definedness; it's a bug because I didn't spot some
interaction that caused the variable that I was accessing the array at to
become a value that was too big.

But to some extent I've been arguing about C just because I think I'm
right rather than because I think it's important to considering the
overall question. Suppose that I'm working in Java. I'm told that if an
access is made to a index outside the range of an array it throws an
exception. How would you classify the following situation:

* within some routine f:
* a variable v is set up which holds the (valid) index at which I want to
  access the array at the start of the routine.
* somewhere deep in a lot of code something ERRONEOUSLY assigns a value to
  that variable v which is outside the range of the array.
* a read of the array at index v is attempted. An exception is thrown.
* a catch block governing f catches the exception and prints out `Attempt
  to access array outside its 

RE: newbie conceptual question [from haskell list]

2001-07-26 Thread D. Tweed

On Thu, 26 Jul 2001, Frank Atanassow wrote:
 also safety, and theorems for free. Then there are other properties which
 are obvious (to a programmer) in a Haskell program which get buried in the
 equivalent C(++) program, e.g., that every member of a data structure is
 traversed in a fold (no early exits). Many of these things hinge on the
 typing of a program, which is inferred and checked for you, so there is less
 of a burden on conscientious programmers.

I'm being terribly unfair here; this was probably just a simple slip when
writing a hurried e-mail but if you mean what I think you mean about the
fold:

undefd = undefd

f y x | y=='a'    = "finished"
      | otherwise = y:x

g xs = foldr f "" xs

Main> g ('a':undefd)
"finished"

shows that folds can exit early; if it didn't it'd black hole forever.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: [EMAIL PROTECTED]  |   you have, half your time is spent
work tel: (0117) 954-5250   |   waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: newbie conceptual question [from haskell list]

2001-07-25 Thread D. Tweed

I'd like to respectfully disagree with some of this :-)

On Wed, 25 Jul 2001, Frank Atanassow wrote:

 These things are nice, but the more I learn about functional languages, the
 more I feel that they are only icing on the cake. The most important thing
 about functional languages is that we know _what they are_, and that they
 can be characterized in a regular fashion which is amenable to analysis; in
 other words, we know how to reason about them, and what sorts of properties
 functional programs satisfy.

Umm, I wouldn't agree with that exactly. If you ignore stupidities like
specifying the mod operator % to be implementation defined, I would claim
that C has/can be given semantics which are virtually as simple as
Haskell. Equally I'd argue that we know as much about how to reason about
imperative languages as we do functional ones. The point is that those
semantics don't split the problem of reasoning about what's going on into
nice independent subproblems like the semantics of Haskell do. Equally,
I'd agree we don't know how to perform _effective_ reasoning on them,
probably because it's pretty much impossible without restricting the set
of allowable programs.

 Saying that a language satisfies nice properties sounds abstract and
 useless to some people who just want to get things done, but I really
 believe that it helps not only researchers who know how to read and use
 formal semantics, but also everyday programmers, because they absorb some of
 these reasoning methods and then start using them intuitively and
 unconsciously. For example, every Haskell programmer has some grasp of
 referential transparency, and uses it implicitly every time he thinks about
 how to structure or rearrange his programs.

Again, as a C++ programmer I have some grasp of what program
rearrangements are valid (e.g., I can't move an assignment involving an
expression in another variable, say v, from before an assignment to v to
after an assignment to v), and I use it implicitly every time I think
about structuring or rearranging my programs. The problem is that that
doesn't really help because determining whether there's an assignment to
variable v between where an assignment is now and where I want to move it
to is `decidedly non-trivial'.

What I'm basically trying to say is that I don't think it's `simplicity of
semantics' that's the FP advantage but the _effectiveness in the world of
people writing programs_ of those semantics. (Indeed I vaguely remember
reading that John McCarthy didn't develop LISP as a computational
realization of the existing theory of lambda calculus but rather because
he thought programming with only functions would be an effective way of
doing things. The `lambda' was an accident; a friend gave him a book on
Church's theory late in the development and he admitted he didn't really
understand it but thought the name lambda for the function definition
construct was `cool'.) And that advantage comes partly by restricting
yourself to programs that allow this kind of reasoning; this is no
different from the fact that I won't drive after having drunk any alcohol
even though I might well in actuality still be safe, because it's easier
to follow that rule than to figure out if I'm genuinely safe.

 Of course, this argument holds for any simple language, not just
 functional ones, where by simple I mean easy to reason about. For

As I say `easy to _effectively_ reason about' =/= simple semantics.
I'm making this point because the natural rejoinder from a C programmer
being told that C (as an imperative language) has more complex semantics
than Haskell is to say `no it doesn't', which I agree with, the point
being that Haskell has a semantics which makes it easier to reason about
effectively. (I'm glossing over the distinction between the various
kinds of semantics for a language such as denotational and operational
here.)

 example, there are concurrent calculi of processes, object calculi and
 first-order algebraic languages which also share this property, and I prefer
 any of them to warty conventional languages. (And this is why I also like
 SML: even though it's not referentially transparent like Haskell, I know
 what sorts of properties it satisfies.)

Now this to me is a really interesting statement. IIRC from many years ago
there's not necessarily an indication in the type system when a function
involves references. Do you actually reason (albeit subconciously) using a
full semantics which includes possible effects due to aliasing of
references or changes to their contents somewhere deep in the functions,
or do you use a rule `well this is 99% likely to be referentially
transparent so I'll assume it is, but I'll keep in the back of my mind the
possibility this assumption may be wrong'? (That's what I'd end up doing I
think.)

 Haskell (and ML) really is different. First, it's not
 implementation-defined.

Umm, again I'd say this is debatable: the volume of e-mail to/from Simon
PJ about the 

Re: Templates in FPL?

2001-05-23 Thread D. Tweed

On 22 May 2001, Carl R. Witty wrote:

 D. Tweed [EMAIL PROTECTED] writes:
 
  In my experience the C++ idiom `you only pay for what you use' (==
  templates are essentially type-checked macros) and the fact most compilers
  are evolved from C compilers makes working with templates a real pain in
  practice.
 
 I'm not sure what you mean by type-checked here.  Templates are not
 type-checked at definition time, but are type-checked when they are
 used; the same is true of ordinary macros.

I was thinking in terms of (to take a really simple example)

template<class T>
void
initialiseArray(T** arr, const T elt, int bnd)
{
  for(int i=0; i<bnd; ++i){
    arr[i] = elt.realValue();  //   <---
  }
}

If I try and use initialiseArray(an object of type foo, an object of type
bar), then with the template function I get the error when passing in the
parameters; with a macro the type error would appear at the indicated line,
and indeed if by pure chance bar just happened to have a member function
called realValue then I wouldn't get one at all. In my view it's reasonable
to say that template functions are type-checked in essentially the same
way as general functions, giving type errors in terms of the source
code that you're using, whereas there's no way to ignore the fact that
macros dump a lot of code at the calling point in your program (mapping
certain identifiers to local identifiers) and then type-check in
the context of the calling point. The above code is a really contrived
example, but I do genuinely find it useful that template type-error
messages are essentially normal function type-error messages.

[As an aside to Marcin, one of my vague plans (once I've been viva'd,
etc) is to have a go at writing some Haskell code that attempts to try and
simplify g++ template type errors, e.g., taking in a nasty screen long
error message and saying

`This looks like a list<a> & list<a>* mistake'

Like everything I plan to do it may not materialise though.]

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] |you have, half your time is spent
work tel: (0117) 954-5250  |waiting for compilations to finish.



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Templates in FPL?

2001-05-18 Thread D. Tweed

(This response comes from the context of someone who likes FP but has a day
job writing in C++.)

On Fri, 18 May 2001, Jerzy Karczmarczuk wrote:

 We know that a good part of top-down polymorphism (don't ask me what
 do I mean by that...) in C++ is emulated using templates.

Umm... what do you mean by `top down'? The two senses in which I use
polymorphism in C++ are:

(i) using super classes (in the sense that A is a superclass of B if B
inherits from A) to perform operations that only make sense for objects of
the semantic type A. My canonical example is working with quadtree nodes,
where there are various types of nodes which can hold different kinds of
data but the base class qnode holds things like position, etc.

(ii) using templates to write generic algorithms which either don't depend
on the object type at all (e.g., vectors, lists, etc) or which depend on
one or two conceptual operations which are very elementary (e.g.,
meaningful equality, addition or a `display yourself on stdout' function).
In C++ these could _almost_ be done using superclasses but with problems
because

 (A) the standard types (e.g., int) can't have their place in the class
 hierarchy redefined (e.g., int inherits from debug_show)

 (B) deep hierarchies are really not handled well by debuggers and it'd be
 a real pain trying to look at the C++ equivalent of the haskell
 `data A = ... deriving (Show,Ord,Monad,...)'

The almost that I referred to above comes from the fact that
(AFAIK) without using something nasty like RTTI you can't create new
objects of the true type in a function to which you've passed a pointer to
the base class.

That's the conceptual overview. In terms of pragmatics __at least for my
work in image processing__ I don't write classes that much, seldom use
inheritance and I'm 99% sure I don't have any inheritance more than 1
level deep. On the other hand I write a lot of templated code which would
be equivalent in haskell to either

f :: a -> b -> ... -> a

or

f :: Eq a => a -> b -> ... -> a

(i.e., algorithms that need just one or two low-level ideas such
as equality). I much prefer the Haskell way of doing this with
type classes, but I just don't feel brave enough to attempt the levels of
class hierarchy that this implies in C++. (In some ways this is a shame
since, because in template functions the member functions used are only
matched syntactically -- as there's no superclass to ensure semantic
equivalence -- I've made one or two mistakes when I forgot a member
function with the same name actually meant something different.)
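
A tiny sketch of the sort of `one low-level idea' polymorphism I mean (the
function name is invented), where the only thing the algorithm asks of the
element type is equality, and the Eq constraint documents exactly that:

countRuns :: Eq a => [a] -> Int
countRuns [] = 0
countRuns xs = 1 + length (filter id (zipWith (/=) xs (tail xs)))

In C++ I'd write this as a template and rely on the element type happening
to have a suitable operator==.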

 Always when somebody mentions templates in presence of a True Functionalist
 Sectarian, the reaction is What!? Abomination!!.
 
 Now the question: WHY?
 
 Why so many people say the C++ templates are *wrong* (and at the same time
 so many people use it every day...)

In my experience the C++ idiom `you only pay for what you use' (==
templates are essentially type-checked macros) and the fact most compilers
are evolved from C compilers makes working with templates a real pain in
practice. Additionally, I think a lot of people who dislike them are
old-time C hackers who don't see why it isn't good enough to do
polymorphism via void*'s and function pointers. As to why I at least use
it, it lets me write polymorphic functions without needing deep class
hierarchies (which as noted above aren't nice in C++) or going into the
error prone mire of the void* approach.

 Is it absolutely senseless to make a functional language with templates?
 Or it is just out of fashion, and difficult to implement?

I may be missing something obvious, but given Haskell's well thought out
(IMO) prelude class structure and the fact that deep class hierarchies
with `multiple inheritance' aren't a problem in Haskell, I don't see that
it would buy you anything in Haskell. On the other hand, if the standard
prelude didn't have the class hierarchy I think templates would be much
more useful.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] |you have, half your time is spent
work tel: (0117) 954-5250  |waiting for compilations to finish.


___
Haskell mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell



Re: Inferring from context declarations

2001-02-21 Thread D. Tweed

George Russell wrote:
 
 (3) Simon Peyton Jones' comments about dictionary passing are a red herring,
 since they assume a particular form of compiler.  Various (MLj, MLton)
 ML compilers already inline out all polymorphism. Some C++ compilers/linkers
 do it in a rather crude way as well, for templates.  If you can do it,
 you can forget about dictionary passing.

[Standard disclaimer: I write prototype code that's never `finished' to
ever-changing specs in a university environment; other people probably
view things differently.]

I'm not sure I'd agree about this. Note that there's two levels, inlining
polymorphic functions at the call site and `instantiating polymorphic
functions at each usage type' without doing the inlining. C++ compilers
have to at least do the second because of the prevailing philosophy of
what templates are (i.e., that they're safer function-macros). Some of the
time this is what's wanted, but sometimes it imposes annoying compilation
issues (the source code of the polymorphic function has to be available
every time you want to use the function on a new class, even if it's not
time critical, which isn't the case for Haskell). I also often
write/generate very large polymorphic functions that in an ideal world
(where compilers can do _serious, serious_ magic) I'd prefer to work
using something similar to a dictionary passing implementation. I'd argue
that keeping flexibility about polymorphic function implementation (which
assumes some default but can be overridden by the programmer) in Haskell
compilers is a Good Thing.
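
To make the two strategies concrete, here's a rough sketch with invented
names: an overloaded function like `member :: Eq a => a -> [a] -> Bool' can
either be specialised per type, template-style, or compiled once against an
explicit dictionary:

data EqDict a = EqDict { eq :: a -> a -> Bool }

-- compiled once, dictionary passed at run time
memberD :: EqDict a -> a -> [a] -> Bool
memberD d x = any (eq d x)

-- what a per-type specialisation could look like instead
memberInt :: Int -> [Int] -> Bool
memberInt x = any (== x)

It's exactly the choice between these two that I'd like compilers to keep
open.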

Given that, unless computing hardware really revolutionises, the
`speed/memory' profile of today's desktop PC is going to recur in wearable
computers/PDAs/etc, I believe that in 20 years' time we'll still be figuring
out the same trade-offs, and so need to keep flexibility.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] |you have, half your time is spent
work tel: (0117) 954-5250  |waiting for compilations to finish.



___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Specifications of 'any', 'all', 'findIndices'

2001-01-23 Thread D. Tweed

On Tue, 23 Jan 2001, Mark Tullsen wrote:

 Johannes Waldmann wrote:
  ...
  I'd rather write clear code, than worry about efficiency too early.
  Who said this, "premature optimization is the root of all evil".
 
 I've always attributed this to Donald Knuth:
 
   Premature optimization is the root of all evil in programming.

In his paper on the errors of TeX (no proper ref but it's reprinted in his
book on Literate Programming) he calls it Hoare's dictum (i.e. Tony
Hoare) although the context suggests that this isn't an `official
name'. Dunno if Hoare heard it from someone else though...

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] | you have, half your time is spent work tel:
(0117) 954-5250 | waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: Will Haskell be commercialized in the future?

2000-11-27 Thread D. Tweed

On Mon, 27 Nov 2000, Frank Atanassow wrote:

  Java.  Do you think that Haskell would be better without `unsafePerformIO'?
 
 Without remarking on C#, I just wanted to point out that unsafePerformIO is
 not part of the Haskell language...

Umm, I hope that everyone in the implementors camps feels unsafePerformIO
is a de facto (if not de jure) part of the haskell libraries. I use it an
awful lot, and ironically not to do `imperative' type things but rather to
deal with the case where files on disk, etc, are static over the entire
program lifetime, so that their value can unambiguously be taken to be
their contents, etc. In some ways it's aesthetically annoying that the
same function name is used for both situations where IO isn't strictly
ordered and you don't care if this means you get different file contents
depending on when the read happens to occur, and when a file is
essentially a `raw string CAF that happens to be on disk rather than
compiled in'.
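
A rough sketch of the `file as CAF' usage I mean -- the file name is
invented, and I'm using the module the function lives in nowadays:

import System.IO.Unsafe (unsafePerformIO)

configText :: String
configText = unsafePerformIO (readFile "config.txt")
{-# NOINLINE configText #-}

which is only reasonable precisely because the file is assumed constant
over the program's lifetime.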

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] |you have, half your time is spent
work tel: (0117) 954-5250  |waiting for compilations to finish.


___
Haskell-Cafe mailing list
[EMAIL PROTECTED]
http://www.haskell.org/mailman/listinfo/haskell-cafe



RE: The importance and relevance of FP

2000-08-19 Thread D. Tweed

On Fri, 18 Aug 2000, Doug Ransom wrote:

 I do believe FP is currently 90 degrees out of phase with OO.  I think the
 issue with tuples, lists, conses, etc. is the big problem.  I currently see
 no way for someone to write a clever matrix library in Haskell and have it
 seamlessly integrate into the NGWS framework (the new object type and
 runtime system from MS likely here to stay for a long long time) and C#, VB,
 etc.  The  impedance mismatch is just too high.

I can't quite see why you say (at least twice) `is 90 degrees out of phase
with OO' rather than `90 degrees out of phase with lots of assumptions &
attitudes prevailing in mainstream (imperative) programming', and surely
that's the big problem? I'm just pointing this out because I do believe
that (de-hyped) OO is a useful & productive way of programming some kinds
of thing, and that both type classes and the object oriented Haskell
extensions (nb: only read the reports, not actually programmed with
these firsthand) are useful for solving problems using Haskell.

However, the issue that lots of the simple & productive ideas
from FP are culturally alien & even suspect to programmers in other
languages is very true. I write lots of stuff in C++ and the fact that I
have functions returning two results return a pair<T,U> rather than either
a specially created class or a copy-into-reference-parameter, or that I
use STL vectors/lists rather than writing inplace code using
arrays/pointers, elicits cries of `Yuck' & `Too weird'. And I agree that
this is a real phenomenon/problem which may well lead to Haskell remaining
a language that's scorned (in both senses) and runtime systems which make
it difficult to do simple Haskell things.

But unless I'm missing something, this has nothing in particular to do
with OO. (I may well be: I know nothing of NGWS)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|tweed's law:  however many computers
email: [EMAIL PROTECTED] |you have, half your time is spent
work tel: (0117) 954-5250  |waiting for compilations to finish.





Re: Library conventions

2000-06-27 Thread D. Tweed

On Tue, 27 Jun 2000, Lennart Augustsson wrote:

  Using `Left' and
  `Right' for such cases is fundamentally confusing since it is not
  clear what the meaning of `Left' and `Right' is.
 Well, I don't totally agree.  Anyone using Right for Wrong deserves to
 have his/her head examined. :)

I probably deserve to have my head examined for different reasons :), but
lots of my code uses Left for an ok result and Right for an error indication,
and I'm a native english speaker. I think the reason is the same reason
non-native english speakers don't complain about how the keywords/ubiquitous
`words' in virtually every programming language are all english terms: very
quickly they just become abstract symbols based on how they look/are typed
without any connection to the natural language they were drawn from. (I've
always been puzzled by the choice of Left and Right, which imply symmetry,
when at least in the programs I write, Primary & Secondary would fit the
usage better, since if there is general symmetry rather than a
`desirability ordering' that's almost always done in a new datatype.)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|I shoulda realised building the 
email: [EMAIL PROTECTED] |memory allocation subsytem with 
work tel: (0117) 954-5253  |--with-malicious-ai was a bad idea.






Re: mode argument

2000-06-01 Thread D. Tweed

 I knew of the namespace collision effect. 
 But one has to choose between the bad and worse.
 
 And in any case, there remain too many ways to error.
 We also may paste   
  True :: Bool  instead of  False
 (the type design does not help),
  x / 0  instead of  x / 2,   
  take n xs  instead of  take (-n) xs
 We also may forget to write a program and try to use it.

Do you really believe the general principle that `if something is
inherently error prone then a change is only worth making if it solves all
the errors completely, not if it only makes things slightly less error
prone'? That would suggest that strong typing, Haskell-style modules and
numerous other innovations are mistakes made in the language design
process.

 
 Generally, concerning the mode argument: looks like the Haskellites 
 avoid it. I am just trying to understand what is going on.
 
 Why people do not complain on the mode argument outside of Haskell:
  `zip -r ...'  `unzip -t xxx.zip'  `tar xvfz xxx.tar.gz'

You're talking to someone who has about a hundred one line perl scripts
that replace `command -acnse ' type things with meaningful names :-)

 And it is definitely not good to have several functions in the 
 interface instead of several mode values for one function.  

I'll admit I'm not sure either way on that one, but mode values really
should be typed.

   data ModeQuotRem = QuotRem_minRem | QuotRem_positive | QuotRem_other
  deriving(Eq,Ord,Enum,Show,Read)
   -- contrived
 
   data Mode_sort = Sort_quick | Sort_merge | Sort_insert | Sort_other 
deriving(Eq,Ord,Enum,Show,Read)
   ...
 Again, `Positive' would not do, it should be something like  
 QuotRem_Positive, and so on.

The only collision here if you remove the QuotRem_ & Sort_ prefixes comes
from the `_other' constructors, which seem like excess generality to me: do
you really want an option to use a QuotRem function in a mode about which
you know nothing? If Fergus' suggestion about allowing overloaded data
constructors for H-2 is allowed then I've no problem with multiple instances
of Positive, since each use point has a meaning which matches the
natural/technical language meaning of `positive', and will start ringing
alarm bells in my head if what I meant was `negative'. The problem with
chars is if for example `p' means positive (ie >=0) in one function and
strictly-positive (ie >0) in another: I don't think you can have both
mnemonic value and collision freeness in a namespace with effectively 26
values.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|I shoulda realised building the 
email: [EMAIL PROTECTED] |memory allocation subsytem with 
work tel: (0117) 954-5253  |--with-malicious-ai was a bad idea.





Re: mode in functions

2000-06-01 Thread D. Tweed

On 1 Jun 2000, Ketil Malde wrote:

 I could accept "mode flags" if the algorithm is extremely similar,
 e.g. passing a comparator function to a sort is a kind of mode flag
 (think ordered/reversed) which I think is perfectly acceptable.
 Having flags indicating algorithm to use (sort Merge (s:ss)) is IMHO
 silly. 

There are cases where functions taking mode arguments might be more easily
usable than duplicate functions (note I'm not necessarily arguing this is
one of them) because various things can be done on the mode value (eg,
equality tests, automatic enumeration, etc) which can't be done on the
functions themselves. The following bizarre code (which runs in Hugs Jan
98 at least) indicates the sort of thing


import Maybe

data FType = F | G | H deriving (Eq,Ord,Enum,Show)

operation :: FType -> a -> Maybe a
operation H a = Just a
operation _ _ = Nothing

tryUntilSucceedWithOperation :: Eq a => a -> a
tryUntilSucceedWithOperation x
  = (fromJust . head . dropWhile (==Nothing) .
     map (flip operation x)) (enumFrom F)

where operation is the library code and tryUntil... is the end-user code.
(Obviously operation would be something with a less predictable
succeed/fail pattern.) Then adding a new method with FType J, say, just
requires slotting J into the appropriate position in FType and everything
else works automatically. I can half see how you might be able to use typed
mode arguments to do some things in a much simpler way than with completely
different functions, but I haven't the time to resolve it to a convincing
real world example, so I may be wrong :-)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|I shoulda realised building the 
email: [EMAIL PROTECTED] |memory allocation subsytem with 
work tel: (0117) 954-5253  |--with-malicious-ai was a bad idea.





Re: mode for standard functions

2000-05-31 Thread D. Tweed

On Wed, 31 May 2000, S.D.Mechveliani wrote:

 And we can hardly invent the mode type better than  Char,
 because any specially introduced mode types bring the long names.
 
   quotRem 'n' x (-3)   looks better than the pair  quotRem & divMod,
 and
   quotRem QuotRemSuchAndSuch x (-3)
  looks worse than  quotRem & divMod.

I suspect that such judgements aren't universal throughout the haskell
community. I prefer the QuotRemSuchAndSuch version for two reasons:

(1) Programming constantly reminds me how small my brain is and how many
trivial mistakes I make. I'm much more likely when looking over/debugging
my code to spot a trivial slip if mode arguments are explanatively
(?) named than if there's a cryptic one-character mode argument there.

(2) Having an enumerated type is better because it will cut down on
`accidental namespace collisions' if I cut and paste something which
expects mode character 'n' to mean one thing into a function call where it
means something completely different.

To be fair, there are two minor cons that haven't been mentioned:

(1) Long names, the Haskell layout rule & current dumb editors are a match
made in hell: I want an editor which understands enough of the layout rule
to choose the most aesthetic layout as I type, even given the difficulties
of long names, with only a few minimal hints.

(2) These enumerated types presumably have to be added to explicit import
lists, even if you're only using the most common type. Much existing
script breakage!

My thoughts at least,

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|I shoulda realised building the 
email: [EMAIL PROTECTED] |memory allocation subsytem with 
work tel: (0117) 954-5253  |--with-malicious-ai was a bad idea.





Re: multilingual programs

2000-03-29 Thread D. Tweed

On Wed, 29 Mar 2000, Matthias Mann wrote:

 Has anybody some experience on what's the best way to write programs that may 
 interact in multiple languages?
 
 My first thought was to extract all texts from the source and put them into a 
 big list or array. The program then accesses the list corresponding to the 
 selected language.
 
 Any other (better) ideas? Should inputs, e.g. answers to yes-no-questions, be 
 handled the same?

Not sure if this is a better idea, but the approach that, from what I can
gather, a lot of the GNU programs use is equivalent to replacing every
literal string in the text with roughly

(lookup language "the string")

where lookup :: Language -> String -> String and language is (in C) a global
variable containing the target language. These strings get looked up in
an external set of translations for various languages at runtime (ie once
for each time the string is called) and if a translation is found it's
used, otherwise the original string is used. The advantages of this seem
to be that (1) translators can provide extra translations without needing
a program recompile and (2) it's failsoft so that if there's no
translation there's still a string to use, albeit in the language the
programmer used and (3) with the _(x) macro it doesn't add `visual noise'
to the program logic. The disadvantage of this kind of scheme for Haskell
is that there's no way to get a user-settable global variable without
everything going monadic (or you use an unsafe operation) so it'd have to
be passed as an explicit argument to every function needing it. Presumably
with answers the user types (as opposed to GUI components with labels that
they activate) I guess you're into the realm of regular expressions rather
than strings.
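
A rough sketch of the lookup-table version with the language passed
explicitly -- all the names and the example catalogue are invented:

import qualified Data.Map as M

data Language = English | German deriving (Eq, Ord, Show)

type Catalogue = M.Map (Language, String) String

translations :: Catalogue
translations = M.fromList
  [ ((German, "Do you want to continue?"), "Moechten Sie fortfahren?") ]

-- fall back to the programmer's original string when there's no translation
lookupText :: Language -> String -> String
lookupText lang s = M.findWithDefault s (lang, s) translations

which is failsoft in the same way as the GNU scheme but threads the
language around by hand.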

I'd be interested to know if there's a more natural way to do this for
haskell.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.





Re: Ratio: (-1:%1) (-1:%1)?

2000-03-24 Thread D. Tweed

On Fri, 24 Mar 2000, Marc van Dongen wrote:

 Hmm. I must have missed something. My hugs (1.4) allows it.
 I was assuming that Haskell did allow it.
 As it turns out my latest ghc doesn't. That's cool.

If you haven't loaded any modules then hugs is in `module scope' of
prelude and it's possible. If you do, eg, :l List then you end up in that
module scope and it's no longer allowed.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.





runtime optimization of haskell

2000-03-23 Thread D. Tweed

This is just a curious thought:

I happened to read
http://www.arstechnica.com/reviews/1q00/dynamo/dynamo-1.html which makes
the very interesting point that optimizing compilers have a difficult job
given that they don't know the relative importances of various paths of
execution through the program under typical runs (and in particular the
extreme case that a particular path never happens but which isn't
statically detectable at compile time, only run-time) but that a run-time
reoptimizer can detect and take advantage of this to a useful degree. It
looks a bit beyond JIT compilation since it takes advantage of run-time
trace information and different to partial evaluation since there's no
looking at how the fixed input simplifies the high level source, only what
paths through the executable code happen frequently.

I'm just curious if there's anyone in the Haskell/FP community working on
such things? (The closest thing I'm aware of is David Lester's stuff on
throw away compilation (sorry no pointer)) It just seems that functional
languages are an area where it's particularly true that the actual
run-time patterns of execution may be a tiny subset of the `worst-case
compile time range of execution paths'. Or is it the case that it's likely
that a general binary reoptimizer will work equally well with code
compiled from all languages?

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.





Re: runtime optimization of haskell

2000-03-23 Thread D. Tweed

On Thu, 23 Mar 2000, D. Tweed wrote:

 such things? (The closest thing I'm aware of is David Lester's stuff on
 throw away compilation (sorry no pointer)) It just seems that functional

As Julian Seward kindly mentioned to me, I meant David Wakeling.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.





Re: speed of compiled Haskell code.

2000-03-20 Thread D. Tweed

 "Ch. A. Herrmann" wrote:
I believe that if as much research were spent on Haskell compilation as
on C compilation, Haskell would outperform C.

Unless I've got a dramatically distorted view of the amount of research
that goes on for imperative vs functional languages, and C vs Haskell, it
seems that they get, to an order of magnitude, the same amount of
research. Haskell doesn't do as well as C in spite of this because
compiling a functional program to run well on current hardware is much
harder than compiling C to run well on current hardware. From my
understanding this is because doing well for Haskell-like languages
requires deducing run-time behaviour from a program that abstracts away
from it (eg figuring out update analysis, fold-build
transformation, etc) whereas C programs have much more of this done by
the person writing the program. Of course, that's why the C is likely to
be less adaptable than the Haskell, which is why all Right-Thinking
Programmers (TM) think Haskell is a better programming vehicle :-)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.




Re: HaskellDoc?

2000-03-14 Thread D. Tweed

On Tue, 14 Mar 2000, George Russell wrote:

 Frank Atanassow wrote:
  What do you all think?
 Well I suppose that includes me, but I'm a bit confused.  I've looked at some of
 the .lhs files containing the source of GHC, but the so-called literate nature
 of the code doesn't seem to me to make it any better.  Specifically,
 it doesn't do anything that comment characters can't do.  So could someone
 explain what exactly literate programming for Haskell is intended to achieve?
 The only thing I really miss in .hs files which is done by Knuth's Web
 (in "TeX the program") are indices for each module indicating where the values
 and types it uses come from, and where the values and types it defines are used.

The other big advantage of literate programming for imperative languages
like Pascal & C is related to that: because of the difficulties in passing
variables in and out of functions, and creating the equivalent of
lambdas, functions tend to get quite big without a hierarchical
decomposition. By allowing the `<code chunk 57>'-style macros it allows a
hierarchical decomposition for exposition/maintenance purposes without the
need to do it via functions in the code and hence write zillions of lines
of `marshalling-type code'. This big advantage seems to be negligible for
Haskell since higher order functions and let/where clauses handle this as
well as macros, and really rearranging the order (like Knuth does at
several points) makes it tricky to preserve the indentation required by
the layout rule.

As much of a fan as I am of literate programming (I even bought a copy of
METAFONT: the program!) I'm not sure anything more advanced than the
Bird-tracks notation really makes much sense for Haskell.

 Is the idea that the library documentation should be identical with the code?
 Because I don't want this - I don't want to look at code, however nattily presented,
 when all I want to know is how to call Posix.select (say).  Imagine if the TeXbook
 didn't exist and all we had were "TeX - the program"; I think it is clear that
 people who find the TeXbook hard going would have even more trouble with "TeX - the
 program".

Documentation is a vague term: certainly it'd be undesirable for a
specification of the libraries to be just a literate copy of the code
itself. But if you're thinking in terms of an open source project where
people fix bugs in the libraries then libraries that contain some
explanation would be useful (as an additional item to the library spec). I
don't know if this is just my personal experience but the hardest part of
debugging somebody else's code is to figure out what they believed it was
doing when they wrote it (essentially mentally extending the
specification from the external behaviour of the function to its
implementation strategy). Here even partial commentary might make
sense.

Just some random thoughts,

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.




Re: HaskellDoc?

2000-03-14 Thread D. Tweed

On Tue, 14 Mar 2000, George Russell wrote:

 "D. Tweed" wrote:
  Documentation is a vague term: certainly it'd be undesirable for a
  specification to the libraries to just a literate copy of the code
  itself. But if you're thinking in terms of an open source project where
  people fix bugs in the libraries then libraries that contain some
  explanation would be useful (as an additional item to the library spec).
 Couldn't agree more.  Every module should be commented, and every exported
 symbol (and lots of the internal ones) should also be commented.  But I don't
 think you need anything more than comment characters for this.

Well I think of, for example, Bird-tracks as a weird comment notation
where you mark the bits that aren't comments. The point is that there are
various uses for comments in code which it's helpful to be able to
distinguish by machine analysis:

* Long expository comments that can be displayed as paragraphs of natural
language
* Inline comments that should be displayed as small snippets within the
code
* Comments that actually contain meta-program information, eg pragmas
* Comments that have been used to remove sections of code temporarily

I think that things like JavaDoc, POD and python doc-strings can be viewed
as simple literate programming techniques or as comment conventions that
are machine analysable depending on your preference. Certainly I wouldn't
want to use {- -} braces for literate (ie long) comments without some
convention for auto-distinguishing them from other uses of {- -}.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.




Re: HaskellDoc?

2000-03-14 Thread D. Tweed

On Tue, 14 Mar 2000, George Russell wrote:

 "D. Tweed" wrote:
  * Comments that actually contain meta-program information, eg pragmas
 The Haskell standard system for putting information for the compiler in
 things which look like comments is obnoxious, but fortunately not _very_
 common.  I don't think it justifies adding yet another comment notation for
 Haskell.   If you compile with unlit and with cpp, like a lot
 of GHC, there must be at least 4 ways of distinguishing information for
 humans from information for the compiler, which seems to me excessive.
 Yet again I feel that the language would be improved by taking things out
 rather than adding more things.  In my opinion you could get by very well
 in Haskell with just "--".

I'm with you on the pragmas in comments. However, I think we're perhaps
talking at cross purposes: I was thinking more along the lines of
conventions that would allow useful tools to be written. Having them
widely accepted means that it's actually worth writing the tools, and
because it's just a convention you don't have to use them; the only caveat
being that if you don't use them you obviously can't expect to be able to
use any tools that happen to be written that _do_ need them (e.g., auto
find english description of function in an IDE, etc). If you were to
change to saying `as far as Haskell itself goes, comments occur within {-
-} braces' then it might be adopted that {- !! denotes a long comment,
{- ! denotes a short comment (in the sense that it should be displayed
inline), {- ? an assertion (which may not be computable, hence why it
isn't done in Haskell code) which could be extracted by a
program-correctness prover, etc, with just plain {- being `unknown but
probably just haskell code that's currently commented out'.

I find I do a lot of stuff in my (bigger) C++ projects involving various
notations within comments that perl scripts can then use to do various
kinds of things (eg, I automatically generate most of my header files,
marking exported prototypes with /*exported*/). What I'm basically saying
is that I agree having the compiler know about different kinds of comments
is unnecessary work, but just having one amorphous comment class seems to
rule out lots of useful stuff.
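
To make that concrete, here's a tiny sketch using exactly the hypothetical
markers above (the function is invented):

{- !! A long expository comment: this module implements the quot/rem helpers
      and explains the sign conventions in detail. -}

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing            {- ! division by zero reported, not thrown -}
safeDiv x y = Just (x `div` y)

{- ? forall x y. y /= 0 ==> safeDiv x y == Just (x `div` y) -}

A compiler sees three ordinary {- -} comments; a documentation or
assertion-extraction tool can treat the three kinds differently.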

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.




Re: HaskellDoc?

2000-03-14 Thread D. Tweed

On Tue, 14 Mar 2000, George Russell wrote:

 In any case, in the original example
 Who the author is, and what the version is, would be better handled by
 CVS or some similar system.  The "Name" is redundant; if it doesn't match
 the filename, we have total chaos anyway.  The License is a drag; I suspect
 it could in fact be dispensed with in most though sadly not all of the civilised
 world, where the presence of a "LICENSE" file in all distributions, or indeed 
 nothing at all, will secure automatic legal copyright protection.  Of course 
 if your lawyers insist I suppose it has to be there . . .  Tested and
 Restrictions would be better handled by referring to a test case file 
 somewhere (with the output it's supposed to produce!), then I can see exactly 
 (A) what is being tested; (B) whether my new whizz-bang Microsoft Visual 
 Haskell compiler can pass these tests.
 
 So my reformed comment would be
 {- Top-level module for HTML combinator library.
See also test cases in tests/HTML.* and user-level documentation in 
   docs/html/HTML.html
Other guff, EG places where the code could be speeded up or improved
License, if the lawyers say we've absolutely got to have it here
-}
 Of course the user-level documentation does not let the author out of putting
 decent comments in the code as well.

I think one of the major motivation for putting stuff that can be put into
convenient xml form in the files is that with the aid of only the dtd
users can use xml tools to do various machine-aided operations (eg,
finding which bits to read, etc) without worrying about what platform or
tools they're working with. Keeping the name & version in CVS raises the
issue: well what if I don't want to use CVS but rather
RCS/PRCS/bitkeeper/DavesOneOfPerlRevisionControlSystem for some Haskell
files? If everyone's gonna be interoperable then the version control
string that's put into the file has to have a common format, so why not go
for xml. Similarly, putting pointers to documents, etc, into appropriate
entries means that tools can read them but humans can also read
them. I'm sure it doesn't take more for a human reader to figure out what
`<user-doc>docs/html/HTML.html</user-doc>' means than what `user docs in
docs/html/HTML.html' means.
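
To make that concrete, a module header along these lines (the element
names and the contents are invented purely for illustration)

{- <module-info>
     <version>1.2</version>
     <user-doc>docs/html/HTML.html</user-doc>
   </module-info>
-}

can be picked apart with a few lines of Haskell and no real XML machinery
at all, e.g.

import Data.List (isPrefixOf)

-- Text after the first occurrence of a pattern, if the pattern occurs.
after :: String -> String -> Maybe String
after pat s
  | pat `isPrefixOf` s = Just (drop (length pat) s)
  | null s             = Nothing
  | otherwise          = after pat (tail s)

-- Text strictly before the first occurrence of a pattern.
before :: String -> String -> String
before pat s
  | null s || pat `isPrefixOf` s = ""
  | otherwise                    = head s : before pat (tail s)

-- Contents of the first <tag>...</tag> element found in a source file.
element :: String -> String -> Maybe String
element tag src = fmap (before ("</" ++ tag ++ ">"))
                       (after ("<" ++ tag ++ ">") src)

so that element "user-doc" src gives Just "docs/html/HTML.html".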

I know there's a lot of hype about XML & people interested in dealing with
the bleeding edge, but it does seem to be the most promising way forward
for tackling the problem that my time is limited and the more I can
quickly, easily & ad-hoc automate, the better things will be. I think the
only problem with the example intro was that (as is to be expected from a
first attempt) it deals with what the author thinks is important rather
than what reader/users would think is important. Hopefully feedback will
improve the eventual definition of the dtd.

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm|ALERT: you are communicating with an
email: [EMAIL PROTECTED] |unsavoury individual with a copy of gdb
work tel: (0117) 954-5253  |who reverse-engineered a file format.





Re: Typo in Haskell 98 Random library

2000-02-02 Thread D. Tweed

On Wed, 2 Feb 2000, Koen Claessen wrote:

            gen1
           /    \
       gen1      gen2           -- once
      /    \    /    \
   gen1   gen2 gen2   gen3      -- twice
 
 In fact, they will produce the *same* generator "gen2" on
 both sides, which will create an undesired dependency
 between the two recursive calls.
 
 This "bug" in Hugs is fixed in the newest version, but it
 was not really a bug in Hugs, but a bug in the report. The
 report should say: "two *independent* generators".
 
 The definition of independent might be: "two generators
 gen1 and gen2 are independent if no "split"-path starting at
 gen1 intersects with a "split"-path starting at gen2".

This might be quite difficult to prove/implement (or maybe not, I haven't
thought about it in real depth). Maybe the solution is to have an extra,
hidden part of the generator state that determines the splitting, so that
although two generators may be equal in the sense that repeatedly calling
next on them produces the same infinite stream of values, they will likely
produce different pairs of generators when they split. (Of course the
hidden state is still behaving deterministically (using a pseudo-random
number generator technique to evolve the splitting state) so that it can
still be forced to be reproducible for debugging purposes.)

This would mean that `independent' could be defined perhaps more tractably
in terms of a condition that a binary tree produced by repeated splitting
has negligible probability of any correlations between the generators in
any subset of the nodes. (The diagram of Koen's shows that it's as much
the fact that having made a non-independent split, the pattern then
repeats itself ad infinitum as the fact that a non-independent split is
made that's causing the problem.)

(One thing I haven't addressed is if it's feasible to have a large enough
internal splitting state that the node subset correlation probability is
very small, and yet be efficiently implementable for splitting the
generators being used at the moment.)
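
A minimal sketch of what such a hidden split state might look like (the
mixing functions and constants below are placeholders chosen for brevity,
not a proposal for an actual generator):

-- 'stream' drives next; 'splitSeed' is the hidden state consulted only by
-- split, so two generators that agree on every value produced by next can
-- still split into unrelated pairs, yet everything stays deterministic.
data Gen = Gen { stream :: Integer, splitSeed :: Integer }

next :: Gen -> (Integer, Gen)
next (Gen s h) = (s, Gen (step s) h)
  where step x = (x * 1103515245 + 12345) `mod` 2147483648

split :: Gen -> (Gen, Gen)
split (Gen s h) = ( Gen (mix s h)        (step h)
                  , Gen (mix s (step h)) (step (step h)) )
  where step x  = (x * 69069 + 1) `mod` 2147483648
        mix x y = (x * 40503 + y) `mod` 2147483648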

CAVEAT: this is a quick thought which may well be flawed...

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.




Re: A hard core game programmers view

2000-01-27 Thread D. Tweed

[Hopefully not off-topic wrt Haskell]

On Thu, 27 Jan 2000 [EMAIL PROTECTED] wrote:

  Look at the popularity of PERL
  for example.  That is one thing I will never understand.
 I'm sure I will get flamed to a crisp for this, but...
 I think PERL can be quite nice when you want a quick hack that will do
 something useful.

Another reason for the popularity of Perl is that it's _popular_ &
_ubiquitous_. Although I like Haskell and some other languages (e.g.,
Mathematica, python, even C++) more than Perl, when I want to produce
something that I hope other people in my lab will use/contribute fixes to,
Perl's what I use (with much swearing & looking up in the manual). There's
only a few machines with a Haskell system available on them, whereas
everything has a Perl interpreter installed, and no shortage of people who
are comfortable with the language.

Of course, what can be done to help start an epidemic
`infecting' people & machines with Haskell I don't know...

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.




Re: drop take [was: fixing typos in Haskell-98]

2000-01-25 Thread D. Tweed

On Tue, 25 Jan 2000, Chris Okasaki wrote:

  I'm with the option (B): negatives are just outside 
 the domain of take & drop, and should give you an error 
  message.
 
 For the people that share this sentiment, can you please
 explain why ints that are too big should not similarly
 give an error?  I can see both being ok, or both being
 errors.  I just don't see why one should be ok and the
 other an error.

As a purely _pragmatic_ argument: code that does things by taking blocks
of stuff from a list (e.g., some form of block based compression
technique) could be written (in broad outline)

f [] = []
f a xs = res : f a' zs
  where (ys,zs)  = splitAt 40 xs
        (a',res) = doStuff a xs

If the list isn't a multiple of 40 then, under (B), only doStuff needs to
know how to deal with incomplete blocks; with values too big being an error,
f has to check at each point whether there's enough list there before trying
the splitAt, so you need a way of ascertaining whether length < 40 without
diverging on infinite lists. It all gets complicated, and this pattern
of `eat fixed size chunks while the list isn't depleted' seems common
enough to warrant simple programming.
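
In code, the pattern I mean is roughly the following sketch (relying on
splitAt being happy with a count larger than what's left of the list, so
the final short block just comes out short and doStuff decides what to do
with it):

chunks :: Int -> [a] -> [[a]]
chunks _ [] = []
chunks n xs = ys : chunks n zs
  where (ys, zs) = splitAt n xs

-- e.g. chunks 40 [1..100] gives two blocks of 40 and one of 20, and
-- chunks 40 [1..] works fine on an infinite list.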

I can't think of a pattern of usage that's natural that
leads to wanting to take negative portions of the list, but maybe that's
my imagination...

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.




Re: drop take [was: fixing typos in Haskell-98]

2000-01-25 Thread D. Tweed

On Tue, 25 Jan 2000, D. Tweed wrote:

Oops, fixing two thinkos

 f _ [] = []
 f a xs = res : f a' zs
   where (ys,zs)  = splitAt 40 xs
         (a',res) = doStuff a ys

(My haskell coding is getting worse than my C++, which I didn't believe
possible...)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.





Re: symbolic indeterminates

1999-11-28 Thread D. Tweed

On Sun, 28 Nov 1999, S.D.Mechveliani wrote:

 Is there any problem?
 Introduce the program variables  x,y... and bound them to the symbolic 
 indeterminates.
 For example, in DoCon program, it is arranged about like this:
 
 let  { s = cToPol ["x","y"] 1;  [x,y] = varPs s }
 in 
 x^2*(x - x*y) ...
 
 Here  cToPol, varPs  are the "standard" functions.
 cToPol   creates a sample polynomial  s  in needed indeterminates.
 varPs f  extracts indeterminates from  f  and converts them to the 
  domain of  f.
 `let'describes the algebraic domain - once.
 Hence, in many computations after `in',  x,y  denote what is needed.

This sounds fascinating: is this approach powerful enough that if I have
a definition of a haskell function purely as a definition for actual
values and then apply it to an indeterminate I automatically get the
expected result (no evaluation)? (ie,can I write

harmonic n = (sum.map (\x -> 1/x)) [1..n]
result=let  { s = cToPol ["x"];[x] = varPs s }
   in harmonic x

to get result = harmonic(x)?) This would be very useful if you could
do such things (I presume that you can also do the equivalent of
evaluating an expression in an indeterminate for a given value of the
indeterminate in this framework) since that (as far as I can see) would
let you alter existing programs so that they partially evaluate themselves
as they run without the need to alter any of the worker functions. (In
spare time a couple of years ago I pottered about trying to do such a
thing for trivial image processing type tasks but couldn't see any way to
do it in Haskell and so cobbled together an interpreter (written in
Haskell :-) ) with roughly mathematica type semantics. Working in native
Haskell would have been _much_ less work.)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.




Re: variables, indeterminates. Reply to reply.

1999-11-28 Thread D. Tweed

On Sun, 28 Nov 1999, S.D.Mechveliani wrote:

 DoCon provides the standard functions  
   cToPol  "coefficient to polynomial",
   varPs   "variables as polynomials".
 In other algebra systems, they are easy to program too - as soon as 
 the polynomial representation method is chosen.
 How this all relates to your question?

I think I misunderstood what you were saying with respect to why these
were like Maple indeterminate variables. There's the simple way in which
maple & mathematica treat indeterminates at the system level, namely
applying a function to an indeterminate which doesn't have a pattern
specifying its value for an indeterminate evaluates to itself,
i.e., log(x) = log(x). Then there's the more complicated sense, that your
sophisticated stuff seems to deal with, where rules are given for
indeterminates and combinations of indeterminates corresponding to
mathematical `objects', eg, polynomials, power series, etc. I _think_ it
was primarily the simple sense that Jerzy was talking about; it's nice
that the system knows how to deal with indeterminates by default since
then functions written purely for arguments with concrete values
produce sensible results automatically. (This isn't just useful for
classical mathematics;  I know someone doing a PhD in optimizing compilers
who uses mathematica where indeterminates are unknown program fragments
and `algebraic simplification rules' are program optimizations.)

As this is drifting off-topic shall we take this discussion offline?

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.




Re: Scientific uses of Haskell?

1999-11-26 Thread D. Tweed

On Fri, 26 Nov 1999, Jerzy Karczmarczuk wrote:

 Do you know what makes Maple so attractive for newbies, for teachers,
 etc? One of the reasons is simply scandalous, awful, unbelievably
 silly : the lack of distinction between a symbolic indeterminate,
 and the program variable. You write  ... f(x) ... and if x has not
 been previously assigned, you get 'x'. Computer algebra packages are
 - from the programming discipline perspective - monsters. You won't
 try to do things like that, in Haskell, ML, or anything reasonable.

The other problem with trying to implement a computer algebra library in
haskell (as opposed to writing a CA system in haskell) is that you need
the ability to scrutinise the `intermediate expressions' for
possible pattern matching rather than just the ultimate value. (Eg

sqr x = x*x
log (sqr x) = 2*log x
log x = PrimLog x

still wouldn't be valid even if hugs were modified so that sub-expressions
applying functions to indeterminates stopped the evaluation process once
an attempt was made to evaluate that subexpression and returned a `human
readable expression'.) Modifying Haskell to do this would slow `non
intermediate-expression matching' functional programs down so much (I
suspect) that I'm inclined to say that Haskell isn't suitable for doing
computer algebra natively, and that this is a reasonable design decision. 
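
(Of course you can do this sort of thing natively by reifying the
expressions as data and writing everything against that type; a rough
sketch, with all the names invented for illustration:

data Expr = Var String
          | Const Double
          | Mul Expr Expr
          | Log Expr
  deriving (Show, Eq)

sqr :: Expr -> Expr
sqr x = Mul x x

-- The rewrite log (sqr x) = 2*log x becomes an ordinary pattern match,
-- because the structure of the argument is now an inspectable value.
simplify :: Expr -> Expr
simplify (Log (Mul a b)) | a == b = Mul (Const 2) (Log (simplify a))
simplify (Mul a b) = Mul (simplify a) (simplify b)
simplify (Log a)   = Log (simplify a)
simplify e         = e

but then none of your existing numerical code works on Expr without being
rewritten, which is rather the point.)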

I'm much more hopeful that functional programming languages will be useful
for prototyping numerical/scientific applications (is that what you mean
when you say `scripting', Eduardo?).

(I'm currently setting aside the first morning of my christmas break to
try and make comprehensible to someone other than me the numerically
intense algorithms in C vs Haskell that I mentioned ages ago on a thread
entitled "Cryptarithm test" (I think). Urgent need to get my Phd stuff
working has unfortunately prevented me from doing this so far.)

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.





Re: your mail

1999-11-25 Thread D. Tweed

On Thu, 25 Nov 1999, Eduardo Costa wrote:

 course. Since I am not able to program in languages like C or 
 Oberon, I would like to have a practical lazy functional compiler
 (or a practical prolog compiler). I hope to convince people to implement
 such a compiler.

I think the compiler that you want _almost_ exists: doesn't nhc98 meet
everything except the `speed of compiled code close to unoptimized C'? The
problem seems (to someone with only an informal knowledge of how these
things are implemented) to be that the curve of performance versus
{complication of analyses for compilation, code size, implementation
effort} for a functional language running on stock cpus is pretty
logarithmic, so you need something of the order of ghc's size to
compile as well as ghc does. If this is inaccurate then please correct me. 

___cheers,_dave
www.cs.bris.ac.uk/~tweed/pi.htm Farenheit 451 is the temperature at  
email: [EMAIL PROTECTED] which paper burns. Let's do an experiment to
work tel: (0117) 954-5253 see what temperature melts human brain cells.




Re: more on Cryptarithm test

1999-09-27 Thread D. Tweed

On Tue, 28 Sep 1999, Fergus Henderson wrote:

  Personally I'd
  always write the above, not so much for performance reasons as the fact
  that if the objects in the vector have a shallow copy constructor
  (generated automatically & silently)  but a destructor that deallocates
  resources you've got an awful mess to debug when it crashes after leaving
  the function; consequently I do this even when it isn't strictly
  necessary. The few other C++ programmers I know do the same thing so
  it's probably reasonable to assume everyone does. 
 
 That's not a reasonable assumption.  If you have a class which has a
 shallow copy constructor but a destructor that deallocates resources,
 then you're already in deep trouble.  Passing by const reference in
 such a case will at best only stave off the inevitable disaster.
 
 Your conclusion is correct, in this case, but the motivation should
 be performance, not defending against buggy code with mismatched
 destructors and copy-constructors.

<wearing image processing researchers hat>
staving off disaster might make things not fall over long enough for me to
get some results out when I'm using buggy classes that I've written (very
rare: a couple of long debug sessions have made me paranoid about this) or
buggy classes/plain C structs that come from someone else's code.

I entirely agree this isn't suitable for mission critical/long lifetime
code, but my supervisor's next student can sue me for writing
non-maintainable code if he can find me :-) 
</wearing image processing researchers hat>

Anyway, my reason for digressing like that in the original mail was to
suggest that it wasn't necessarily an obscure performance improvement
technique but something I often need anyway to get correct code.

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett







Re: more on Cryptarithm test

1999-09-27 Thread D. Tweed

On Mon, 27 Sep 1999, S.D.Mechveliani wrote:

 Now it shows the ratio  * 6 *.

[snip]
 
 But this mess with platforms and versions, is not, probably, so 
 important, because people can compile and run this program in their
 own environments - and correct the performance result.
 
 What do you think of it?

One small comment is that in your functions condition1 & condition2 I
think most C++ programmers would say that you want to write

int condition1 (const vector<long>& x)
 
since otherwise the compiler generally has to obey the normal function
call semantics and create a copy of the vector when it passes it to the
function, rather than working directly with the existing vector. Personally I'd
always write the above, not so much for performance reasons as the fact
that if the objects in the vector have a shallow copy constructor
(generated automatically & silently)  but a destructor that deallocates
resources you've got an awful mess to debug when it crashes after leaving
the function; consequently I do this even when it isn't strictly
necessary. The few other C++ programmers I know do the same thing so
it's probably reasonable to assume everyone does. 

An informal test on my machine reveals that this copying has a significant
effect even for such short vectors (51.5 odd user seconds for orig
version, 19.3 odd user seconds for new version, sys time 0.5 s in each
case but bear in mind no other special precautions taken, compiled on an
SGI O2 using SGI CC with no optimization). Unfortunately I don't have ghc
compiled on my system, so I'll leave the detailed comparison to others.

 -- C++ -
[snip]
 int condition1 (vector<long> x)
 {int i = 0;
  while  (i < 10 && x[i] < 10)  i++;
  return (i > 9);
 }
 int condition2 (vector<long> x)
 {int i = 0;
  while  ( i < 20 && x[i]==9-i )  i++;
  return (i > 9);
 }

PS: I was one of the people who `criticised' Haskell for intensive
calculations and was asked to back this up. I'm working on getting a C++
and Haskell program doing the same thing that people in the know can
compare, but because what I'm arguing is that it's programs with
significant differences between `mathematical algorithm formulation' and
`computer language algorithm formulation' which would benefit most, the
examples are necessarily not tiny code fragments. I'll try and post
something later in the week but there are very pressing demands on my time
at the moment.

 ___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett








Re: What is a functional language? (Was: Re: Functional languages and ... (was: Cryptarithm solver ...))

1999-09-22 Thread D. Tweed

On Wed, 22 Sep 1999, Antti-Juhani Kaijanaho wrote:

 On Wed, Sep 22, 1999 at 02:53:03PM +0100, Claus Reinke wrote:
  Functional programming, i.e., programming with functions, is possible in
  languages that do not support all features that have become common in
  many functional languages.
 [eg. higher-order functions]
 
 Well then, it appears that I have a mistaken idea of what functional
 programming is.  Can you give me, to cure my ignorance, a few examples
 of languages (preferably ones that are in use nowadays) that are *not*
 functional and the reasons why this is so.  Is C functional, since it
 is possible to program with functions in it?

Firstly let me check that we mean the same thing by _higher order
functions_, namely they are functions which return functions. This is
different from the idea that functions are _pure_, namely that the value
returned by a function depends _only_ on its arguments (and not on some
state, as represented by either local `static' variables or global
varables in C). To my understanding, most people would call a language
functional if the natural way of using it leads to a very large percentage
of the computation being done with pure functions, and it's not if it's
natural to use a significant percentage of computation which involves
state, either locally or globally. (Note that repeated assignment is
clearly stateful, so any language where this is a natural way 
of doing things is not functional.) So there's no sharp
dividing line :-S

Haskell is functional, although it has facilities for dealing with
state via monads.
ML is also functional language, even though it has references which allow
it to deal with state.

Emacs-Lisp is not really a functional language because so much of it
involves manipulating state; however it incorporates some of the ideas
from functional programming as it existed in the late seventies.

C is not functional because the most natural way of coding many algorithms
involves using subroutines (misleadingly named functions!) with either
internal state (even if this is only assignment within loops!) or global
state. Neither are Pascal, Modula-x, Oberon, perl,... This is despite the
fact that you could write C/... programs that made sure that the
subroutines were all functions and that you only ever used single
assignment because it's not natural to do it that way.

(In some ways it's like asking `what characterises an alcoholic?': you
need to note that some people who aren't alcoholics occasionally drink to excess and
that alcoholics can go without a drink for short periods.)

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: Cryptarithm solver - Haskell vs. C++

1999-09-21 Thread D. Tweed

On 21 Sep 1999, Marcin 'Qrczak' Kowalczyk wrote:

 Sat, 18 Sep 1999 00:06:37 +0200 (MET DST), Juergen Pfitzenmaier 
[EMAIL PROTECTED] pisze:
 
  I dont't care very much how fast a program runs. I care about how
  long it takes me to write it. If you take a programming task of
  reasonable complexity you will finish *months* earlier using a
  --good-- functional language instead of C++.
  
  Use a functional language and buy a faster computer with the saved
  money/time.
 
 I have to care how fast my programs run. I like writing in Haskell
 very much, it's my favorite general-purpose language, but one of the
 biggest weak points of Haskell for me is poor efficiency (at least
 with ghc, I don't know how fast are other compilers). I wonder whether
 this is the issue of Haskell itself or the compilers, I wonder if I
 can expect things to go better in future.

Hear, hear. Sometimes the problem that you're working on requires such
a lot of computation (e.g., typical image processing stuff) that no
savings from reduced writing time can buy a machine fast enough to
compensate for the slow-down. And that's doubly so if you're in research,
where you need to scrutinise the results of `prototypes' (this isn't quite
the word because it implies that once you've produced it you've ironed out
almost all the problems & issues and are ready to build the `final system')
to figure out why the algorithm isn't working on real data and how to
improve it.

I'm quite open to arguments that maybe you can't
make a functional language that's as nice as Haskell (features like lazy
evaluation, nice type classes, etc) that's also able to `really hammer
until it's totally flat' functional code that implements heavily numerical
algorithms. However I'd argue that this is still an area where a
functional language which was standard, widely used and compilable into
very very efficient code would be of great benefit to many people who work
on simulation, image processing problems, control systems, etc. (So I'm
happy for the Haskell people to say `We can't make
computation intensive programming better because it'd screw up stuff we do
well now' but not that `development time shrinkage compensates for
run-time growth'.)

 I would be much more happy if Haskell could be compiled to a more
 efficient code. "Only 10 times slower than C" may be unacceptable.
 Sometimes the speed doesn't matter, sometimes it does. It's not fun
 to use a worse language only because a better one is too slow.

The thing that's more annoying is that there seems to be no _standard_
way of indicating that a certain small patch of the code is the
metaphorical `inner loop' and that the compiler should try anything and
everything, probably including exhaustive search of exponential search
spaces, to generate good code for it. I know that you can twiddle the
optimisation with pragmas in ghc files, but that's ad-hoc and doesn't
really enable the fanatically insane optimization steps that generating
efficient computation-intensive code from declarative languages seems to
demand.

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett







Re: Haskell Wish list: library documentation

1999-09-08 Thread D. Tweed

On Wed, 8 Sep 1999, S. Alexander Jacobson wrote:

 Are we talking about documentation for the H98 libraries?
 Are these libraries relevant?  Don't MPTC, Existential Types, Restricted
 Type Synonyms, Arrows, and an FFI substantially change the architecture,
 interface, and implementation of the libraries?  As these language
 features are becoming more accepted (implemented in GHC & Hugs), is it
 worth investing time in supporting what are in fact really strange library
 APIs.  

For me at least, 95% of the scripts that I use Haskell for (data
analysis, testing toy models, general prototyping of algorithms) don't
involve any of the above, and unless I'm missing things there's no way
that using them would improve matters. So I'd add my support (though not
at the moment my time unfortunately) to documenting all the haskell
libraries that contain non-controversial classic functional programming
stuff, eg, List, Monad, etc. I'm reasonably frequently in the situation
where I think `There's probably a standard function that I can use to do
this, but it'll probably be marginally quicker to write my own than hunt
it down'. 

Of course I'm aware that I use Haskell for completely different purposes
to people like Alex, and see his point that some of the `interacting with 
the outside world' libraries may be superseded (in a de facto sense)
soon.

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: Q: hugs behavior...

1999-08-25 Thread D. Tweed

On 25 Aug 1999, Marko Schuetz wrote:

 What I would like to know is: wouldn't it make sense to have the
 transformation
 
 f x = e where e does not mention x
 
 -->
 
 f x = f'
 f' = e
 
 in hugs? Did I miss anything?

What if e is huge (maybe an infinite list of primes) and f x is used only
very rarely during the evaluation? Doesn't this force as much of f' as has
ever been evaluated due to functions using f x to be constantly in
memory, or is that a wrong reading of the situation? 
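
A sketch of the sort of thing I have in mind (primes here just stands in
for a big e, and I'm assuming the compiler doesn't float it out anyway):

-- As written inside f, the list is rebuilt per call and can be collected
-- between calls.
f :: Int -> Int
f n = sieve [2..] !! n
  where sieve (p:xs) = p : sieve [ x | x <- xs, x `mod` p /= 0 ]

-- After the proposed transformation it becomes a top-level constant
-- (a CAF), so however much of it has ever been forced stays reachable for
-- the rest of the run, even if g is hardly ever called again.
g :: Int -> Int
g n = primes !! n

primes :: [Int]
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [ x | x <- xs, x `mod` p /= 0 ]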

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






RE: Question

1999-08-20 Thread D. Tweed

Warning: comments based on mailing list/internet obesrvations which may
be more representative of what people say than what they do.

On Thu, 19 Aug 1999, Mark P Jones wrote:

 Hi Alex,
 
 | Out of curiosity, how big is the user community?  How many downloads of
 | the software?  How many are on this list?
 
 I don't know the answers to any of these, but I think you're implying
 "very small", and I'm sure you're right.  Perhaps you're also suggesting
 that our community is too small for this kind of development model, and
 again that may well be true.  What does this say about the future of
 Haskell?

The other question to consider is the `sociological makeup' of the user
base. Looking at most people who hack on Free Software projects they are
either people doing undergrad computer science degrees or who work in
computers after finishing/dropping out part way through computer science
degrees. I'm tempted to suggest that as a consequence this means that
they've got a reasonable background in computers and a desire to do
something interesting (those that are working tend to imply their jobs
bring in money but aren't terribly fascinating).

The impression I get of the Haskell community is that it's made up to
quite a large extent of either post-grad students or people with PhDs
working in academia. (I know there are people like Alex who use Haskell in
their work but I get the impression there's far fewer of them than of the
other two.) In both those cases I could argue that these people have jobs
they find interesting and are relatively un-inclined to spend long
evenings working on code outside their direct work `for the fun of it'
(certainly applies to me). As partial support for this I'd mention that
the only academic I can think of who I know has done some work on
`mainstream' free software without (as far as I know) it being done as
part of one of their research programmes is Lennart Augustsson.

Are there any undergrads / just-graduated-and-got-a-job people out there on
the list who could give a first-hand account of their opinions about free
software and their attitudes to whether they'd prefer to be working on
something mainstream like hacking GNOME as opposed to doing something
`cool but hidden under the hood' on a Haskell implementation? These people
seem to be the sort of people who are the `workhorses' of the free
software community in general.

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: syntax

1999-08-19 Thread D. Tweed

On Fri, 20 Aug 1999, Bob Howard wrote:

 data Tree a = Leaf a | Branch (Tree a) (Tree a)
 Branch :: Tree a -> Tree a -> Tree a
 Leaf :: a -> Tree a
 
 Im just learning haskell and I cant seem to figure out what is wrong with the above 
code.
 Im using Hugs98 as in interperator (sp) and I keep getting the following Error when 
I read the file.
 
 ERROR "Tree.hs" (line 2): Syntax error in declaration (unexpected `::')

In haskell there's a policy that functions and bindings (the x in f x
=...) must begin with a lowercase letter whilst type names (Tree) and
constructors (Branch) must begin with capital letters.

Your Branch there is a function and hence needs to be called branch.

(Although the error message is bewildering if you don't know what's going
on, the :: is the first place an error is detected because it's also legal
in Haskell to define infix operators, so your line 2 is initially
interpreted as the start of something like

Branch x +o+ Branch y = 

)
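
For completeness, a version that Hugs will accept: the constructors get
their types from the data declaration itself, so the explicit signatures
can simply be dropped (or kept as comments), and an ordinary lowercase
function can carry a signature as usual.

data Tree a = Leaf a | Branch (Tree a) (Tree a)

-- Branch :: Tree a -> Tree a -> Tree a   (implied by the declaration)
-- Leaf   :: a -> Tree a                  (likewise)

branch :: Tree a -> Tree a -> Tree a
branch = Branch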

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: Haskell for numerical analysis ?

1999-08-13 Thread D. Tweed

On Fri, 13 Aug 1999, Rene Grognard wrote:

 My question is therefore: is Haskell at all suitable for complex numerical
 applications ?

_In my opinion_, Haskell is suitable for numerical programming if you
don't need performance close to C (because your problems are small say and
you're prototyping) or the numerical portions of your algorithms are
sufficiently stable (e.g., root finding, SVD decomposition, etc) that you
can code them once in C and then call them from Haskell code that does
`the interesting part of the algorithm'. However it's not (yet?) suitable
for writing numerical algorithms where the performance needs to be close
to C simply to be feasible (e.g., solving large MRF models, etc). The
`yet' comes from the observation that (AFAIK) there's no fundamental
reason why a such algorithms couldn't be programmed in a functional way 
compiled to something close to C since the patterns of computation are in
many ways much simpler than in less numerical algorithms. Of course
detecting these patterns at compile time is much tougher than it looks at
first glance. 

 Is there even any interest for such applications in the Haskell community ?

Well... here's a few indications (sorry no URLs) that there is interest
in programming numerical algorithms in Haskell. 

(1) I'm interested in (semi-)numerical algorithms in Haskell, but it
doesn't have the performance (yet?) to program the numerical bits in
Haskell. Unfortunately the numerical bits are, for my application, the
difficult & rapidly changing bit, so writing them in C and then calling
that from Haskell means I wouldn't get any `programmer efficiency' 
benefit.

(2) Jerzy Karczmarczuk (spelling from memory) has written a renderer in
Haskell (as did one of John Hughes's students at Chalmers, but that looks
to be written in non-standard Haskell). John O'Donnell was looking at
expressing rendering algorithms in Haskell as well I think.

(3) Jan Skibinski has written some packages doing some linear algebra
stuff, and seems a strong proponent of making Haskell better equipped for
such problems with standard libraries. 

The noticeable thing though is that I don't think any of these people work
on numerical algorithms in Haskell `full time', as it were.

 If the answers are yes, are there books or on-line tutorials giving
 non-trivial examples of such uses ? ... libraries ?

Not that I know of.

Hope this helps,

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: Is their a *good* online tutorial and reference for Haskell?

1999-08-11 Thread D. Tweed

On Wed, 11 Aug 1999, Rob MacAulay wrote:

 Thanks for the info. However, I think these are only useful if one 
 has the original TeX source. If one only has the translated 
 postscript, the fonts are embedded (so Acrobat Reader tells me..) 
 as type 3 fonts.
 
 I found a link to something called FixFont which might be able to 
 fix this, but I have not tried it out:
 
 http://www.pdfzone.com/products/software/tool_FixFont.html

I'd be surprised if you can do any sensible conversions from (La)TeX
native fonts to non-native ones because they contain ligatures (i.e.,
specially crafted single blocks for letter combinations like `ff', `ffi',
`ffl', ..., `fj' (an incredibly rare one :-) )) which all the common
postscript fonts that I've seen treat as just single letters bunched
together. (This also means that software like pstotext that tries to
convert post-script to ASCII loses these compounds when applied to
documents using TeX fonts.) I suspect making the original latex source
available is a much better idea. 

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: Again: Referential Equality

1999-07-28 Thread D. Tweed

On Wed, 28 Jul 1999, Hans Aberg wrote:

 At 14:02 +0100 1999/07/28, D. Tweed wrote:
  As for a math description of references, one could take the view that one
  always constructs objects a, with references r. Then what is indicated in
  the language is often the object pairs (a, r) modulo the equivalence
  relation that the references differ.
 
 I'm not sure this is a useful view to take given Lennart & Fergus's
 responses to my previous post saying that the actual references
 corresponding to named values in a compiled program can fluctuate over the
 course of program evaluation for various reasons. (I must admit I was
 surprised that this happens but I guess that's because the FL
 implementations in textbooks aren't that close to Power-User Haskell
 Systems(TM) like ghc & hbc :-) )
 
 If this is a problem, one should create a type of reference that is stable
 during the processes. A "reference" need not be something specific, such as
 a computer address, but could be viewed as method that can be used to
 address the object.

I think I misinterpreted what you originally wrote. I'd thought that you
were trying to produce an `explanatory theory' explaining what would be
happening if the original req idea (comparing internal representations)
were to be implemented; I see now that you were describing how you could
implement `language defined references' with semantics which mean that the
problems that were pointed out don't happen. Clearly with the new proposal
assigning references at once & for all at creation time is by construction
an ok model.

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






RE: Again: Referential Equality

1999-07-27 Thread D. Tweed

On Tue, 27 Jul 1999, Simon Marlow wrote:
  req a b = unsafePerformIO $ do
 a' <- makeStableName a
 b' <- makeStableName b
 return (a' == b')
 
 That's exactly what to use in a situation like this.  Pointer equality loses
 referential transparency in general (as Simon P.J. pointed out), hence the
 use of the I/O monad in our Stable Name API.
 
 Furthermore, if you intend to encapsulate your use of pointer equality in a
 "safe" abstraction, say a memo table, then use of unsafePerformIO is
 entirely justified.  The req function above is of course an "unsafe"
 abstraction, because it exposes the representation of a and b.

Just an idle curiosity question: when you say it loses referential
transparency am I right in saying that this is only with respect to compile
time transformations (& program proofs, etc) but that there's no problem
_for a given compiled program_ about req a b changing its value depending
upon the way demand drives the lazy evaluation reduction strategy, or is
there a subtlety there?

(I'm just curious about the use of the IO monad, which regulates
things which depend on an underlying state which may change with time
in a difficult to predict way depending on the precise pattern of
reduction, rather than say a `compile time monad'.) 

___cheers,_dave__
email: [EMAIL PROTECTED]   "He'd stay up all night inventing an
www.cs.bris.ac.uk/~tweed/pi.htm   alarm clock to ensure he woke early
work tel: (0117) 954-5253 the next morning"-- Terry Pratchett






Re: How to murder a cat

1999-06-11 Thread D. Tweed

[drifting off-topic]
On Fri, 11 Jun 1999, Malcolm Wallace wrote:

 David Tweed writes:
 
  I think it'd probably better software engineering to split the two tasks. 
  Other than a rather nasty syntax, make does what it sets out to do quite
  well:  using specified dependencies and time-stamps on files to run
  `compilation-type' processes in an appropriate way. What would, as you
  say, be very nice is a tool which can be run periodically to auto-generate
  these dependencies.
 
 $ gcc -M main.c   > Makefile
 $ ghc -M Main.hs  > Makefile
 $ hmake -M MyProg > Makefile

Since several people have pointed out the -M option for gcc I'd better
explain that, for reasons of no interest to Haskell users, when tackling
_C++_ it produces dependencies which are much, much worse than they could
be (at least for me). 

Purely personally, I'd find it easier to have a tool that was independent
of the compiler to customise, which could also be extended to other
things, eg, latex, that don't have a -M option. 

___cheers,_dave__
email: [EMAIL PROTECTED]   "`What sort of people would we be if
www.cs.bris.ac.uk/~tweed/pi.htmwe didn't go into the Library?'
work tel: (0117) 954-5253 `Students.' -- Terry Pratchett






Re: make-like facilities (Was: Re: How to murder a cat)

1999-06-11 Thread D. Tweed

On Fri, 11 Jun 1999, Malcolm Wallace wrote:

 Well, compiler-independent is possible (e.g. hmake extracts
 dependencies from any Haskell sources, regardless of compiler.)
 However, language-independent is much more difficult.  How could one
 tool deal with all of C, C++, Haskell, and LaTeX?  Since each language
 has different lexical/parsing rules, not to mention different semantic
 notions of imports or inclusions, it seems to me that the only
 component of such a tool that would be common to all languages is the
 core dependency-graph analyser.

I wasn't thinking of something that would do all of the above at once but
rather a `library-like' base that people could slot into their own code
for determining from a given file what inferences & further checks should
be done from it. Presumably there'd be some common categories, eg,

* included file where changes don't force recompile

* included file where changes do force recompile
 
* file => dependency on (included file basename).(particular extension) if
it exists (eg .h generally implies depends on .o if it exists)

* check file at compile time for regexp (eg `Rerun to get references
correct')

etc,etc.
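
As a very small sketch of the `slot into your own code' idea, here is
roughly what scanning a Haskell file for its imports and spitting out a
Makefile-style dependency line might look like (the .o/.hi naming and the
crude treatment of qualified imports are just assumptions for
illustration):

module Main where

import System.Environment (getArgs)

-- Modules named on "import" lines; imports not starting at the beginning
-- of a line, and anything inside comments, are ignored for brevity.
imports :: String -> [String]
imports src =
  [ takeWhile (`notElem` " (") (dropQualified (drop 7 l))
  | l <- lines src, take 7 l == "import " ]
  where dropQualified s
          | take 10 s == "qualified " = drop 10 s
          | otherwise                 = s

dependency :: FilePath -> String -> String
dependency file src =
  takeWhile (/= '.') file ++ ".o : " ++
  unwords [ m ++ ".hi" | m <- imports src ]

main :: IO ()
main = do
  [file] <- getArgs
  src    <- readFile file
  putStrLn (dependency file src)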

Being honest though I haven't given it any serious thought (and don't
propose to any time this side of submitting :-) ) 

___cheers,_dave__
email: [EMAIL PROTECTED]   "`What sort of people would we be if
www.cs.bris.ac.uk/~tweed/pi.htmwe didn't go into the Library?'
work tel: (0117) 954-5253 `Students.' -- Terry Pratchett







Re: How to murder a cat

1999-06-10 Thread D. Tweed

On Thu, 10 Jun 1999, Craig Dickson wrote:

 programming, especially lazy functional programming. If it seems desireable
 to re-implement a standard Unix utility in Haskell, I suggest 'make'. One
 could even design and implement a 'make' that would know all about Haskell
 modules, and parse them to generate dependencies automatically.

(As an aside, I suspect Friedrich's reason for trying to write a version
of cat in Haskell is that when you're learning something new you don't
jump straight into completely new territory but start by trying to redo
something you already understand in another context. For example, my first
driving lesson wasn't on a motorway in the middle of a rush hour but along
quiet suburban streets at about cycling pace. The fact that this is a very
atypical driving situation doesn't reduce its effectiveness early in the
learning process.)

I think it'd probably be better software engineering to split the two tasks.
Other than a rather nasty syntax, make does what it sets out to do quite
well:  using specified dependencies and time-stamps on files to run
`compilation-type' processes in an appropriate way. What would, as you
say, be very nice is a tool which can be run periodically to auto-generate
these dependencies. Especially nice would be if the source were available
so people could have a go at adapting it to other languages, e.g., C++, or
latex files, etc.

___cheers,_dave__
email: [EMAIL PROTECTED]   "`What sort of people would we be if
www.cs.bris.ac.uk/~tweed/pi.htmwe didn't go into the Library?'
work tel: (0117) 954-5253 `Students.' -- Terry Pratchett








Re: More Bulk types in Context Implicit Conversions

1999-05-05 Thread D. Tweed

On Wed, 5 May 1999, Kevin Atkinson wrote:

 Normally given the class.
 
 class Listable c b where
  toList :: c -> [b]

 toList will never be able to be resolved unless the signature is given
 when toList is called because there is no way to derive the type of b
from the function call.  However when given an instance declaration of
 
 instance Listable [a] a where
   toList c = c
 
 the compiler should be able to resolve toList [1,2,3] however it
 currently doesn't.

Is this inference valid since you might also have somewhere in your
script

instance Listable [Int] Char where
  toList xs = map chr xs

? Haskell takes the conservative view that adding new code, such as this,
to a program should never break old code (except where it is in direct
contradiction of course).
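
To make that concrete, here is a small self-contained sketch (the pragma
line is just how GHC wants the multi-parameter class and flexible instance
extensions switched on; with Hugs you'd use the corresponding flags). With
both instances in scope nothing pins down b at a bare call site, so the
result type has to be supplied one way or another:

{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

class Listable c b where
  toList :: c -> [b]

instance Listable [a] a where
  toList c = c

instance Listable [Int] Char where
  toList xs = map toEnum xs

asInts :: [Int]
asInts = toList [1, 2, 3 :: Int]     -- picks the identity instance

asString :: String
asString = toList [72, 105 :: Int]   -- picks the Int-to-Char instance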

___cheers,_dave__
email: [EMAIL PROTECTED]   "Someday, even toasters will have
www.cs.bris.ac.uk/~tweed/pi.htm   monads in them thanks to Haskell."
work tel: (0117) 954-5253 M P Jones (allegedly)









Re: more on Rules

1999-05-05 Thread D. Tweed

I'm as excited as anyone about the possibility of a limited form of compile
time evaluation via rewrite rules, but I'm getting a bit worried that no-one
has given any examples where there's laziness to consider: I really
wouldn't want semantic differences depending on the degree of optimization
I compile with.

For example, one of Sergey's rules was

x*(inv x) = unity

Assuming that his group has some sub-structure (e.g., considering
polynomials as a group) so that there's an algorithm involved in computing
* which requires evaluating x, then when evaluated via Haskell's rules,

x*(inv x) where x=bottom

is bottom whereas under term rewriting optimization it is unity. I'm sure
if I gave it more thought I could come up with a realistic programming
example, rather than a mathematical one.
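
To make that concrete in Haskell terms (G, unity, inv and times below are
invented stand-ins for Sergey's group and its operations; times plays the
role of *):

newtype G = G Int deriving Show

unity :: G
unity = G 0

inv :: G -> G
inv (G n) = G (negate n)

times :: G -> G -> G          -- strict in both arguments
times (G a) (G b) = G (a + b)

bad :: G
bad = times x (inv x) where x = undefined :: G
-- As written, evaluating bad is bottom, but a compile-time rule
--   times x (inv x) = unity
-- would quietly turn it into G 0.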

Even if it doesn't affect the actual result it may dramatically affect the
size of expressions held temporarily, eg

tipsOfTheDay
 = map addCopyrightLogo (map toUppercase (map addHaveANiceDay
[tip1,tip2,tip3,...,tip_n]))

will, if I understand correctly, be transformed, given the rules
generally envisaged

map f . map g == map (f.g)
map f (x:xs) == f x:map f xs

into

tipsOfTheDay=[addCopyrightLogo(toUppercase(addHaveANiceDay tip1)),..
   addCopyrightLogo(toUppercase(addHaveANiceDay tip_n))]

even if n is 1000 but only three or four tips are ever actually used in
the program, with the consequent increase in the number of closures
stored. The fact that the map rule was written with the intention of
applying in situations like

f xs=sum(map (^2) (12:xs))

doesn't stop it being applied in situations like the above.
(This is a very contrived example, but I have
written similar code only to later hand-optimize it away by rewriting
the nested maps to use a single map & function composition.)

Has anyone any thoughts on this? Is it actually a non-issue?

___cheers,_dave__
email: [EMAIL PROTECTED]   "Someday, even toasters will have
www.cs.bris.ac.uk/~tweed/pi.htm   monads in them thanks to Haskell."
work tel: (0117) 954-5253 M P Jones (allegedly)








Re:STL for Haskell

1999-04-28 Thread D. Tweed

On Tue, 27 Apr 1999, Hans Aberg wrote:

 Then Haskell uses this to implement sets and maps by using C++ STL style
 balanced trees. As Haskell already has generic variables, just as in the
 case of lists, it needs only be implemented once.

As just a general comment, from my usage of the STL it has two advantages
over C++ code I write (neglecting the obvious one that code written by
other people is always more reliable than the code I write myself :-()

(1) It dramatically simplifies memory allocation/deallocation & also
copying before modifying one of the resulting copies. These things are
never an issue in a functional language. 

(2) It provides algorithms which work on any data structure which supports
any notions fundamental to the algorithm, e.g., ordering, etc. So for
example I can prototype using vectors everywhere and, when I discover
that they're taking up too much memory and that I never actually use the
random access facility I can swap them for lists and everything still
works.

For me what would make an STL-like library useful would be having
collections of algorithms available which operate on any `bulk type' for
which they make sense, but I suspect that to be suitably efficient
handwritten versions would be needed for each type. (Folding over sets vs
folding over lists vs folding over bags ..., etc). It might also make the
error messages for some prelude functions, e.g., map, slightly
more vague:

`Int' is not an instance of `MappableContainer a'
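
A sketch of the kind of class I mean (all the names are invented; the
handwritten per-container versions would go where the list and toy-set
instances are below):

class Collection c where
  cfoldr :: (a -> b -> b) -> b -> c a -> b

instance Collection [] where
  cfoldr = foldr

newtype ListSet a = ListSet [a]

instance Collection ListSet where
  cfoldr f z (ListSet xs) = foldr f z xs

-- An `algorithm' written once against the class, usable on either.
csum :: (Collection c, Num a) => c a -> a
csum = cfoldr (+) 0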

___cheers,_dave__
email: [EMAIL PROTECTED]   "Someday, even toasters will have
www.cs.bris.ac.uk/~tweed/pi.htm   monads in them thanks to Haskell."
work tel: (0117) 954-5253 M P Jones (allegedly)








Re: greencard example does not compile

1999-04-16 Thread D. Tweed

On Fri, 16 Apr 1999 [EMAIL PROTECTED] wrote:

 (Btw, does anybody know why ghc4.02 complains about comments starting
 with a character different from space (like in ``--:: type'')? This is
 certainly not intended, is it?)

Last I heard this was a deliberate feature of Haskell 98 so you can
define, e.g., -->, etc as operators. I think a `--' now only starts
a comment if it can't be parsed as part of a longer symbol (e.g.,
"--> Comment " doesn't work but "--comment" or "-- comment" do.)

___cheers,_dave__
email: [EMAIL PROTECTED]   "Someday, even toasters will have
www.cs.bris.ac.uk/~tweed/pi.htm   monads in them thanks to Haskell."
work tel: (0117) 954-5253 M P Jones (allegedly)



Re: Permission to distribute the Haskell 98 reports as part of Debian?

1999-03-22 Thread D. Tweed

On Fri, 19 Mar 1999, Fergus Henderson wrote:

 Generally programming languages themselves are always free, i.e. very
 few people have ever tried to copyright a language, and when they have,
 the courts have for the most part rejected such attempts (e.g. see [1]).
 It is of course possible to trademark the _name_ of a language, as for
 example Sun have trademarked the name "Java", and as was the case with
 "Miranda".  It is also possible to patent techniques that might be
 required to implement a language.  And finally you can of course copyright
 the language's reference manual.  However, the law doesn't really provide
 any form of intellectual property that can cover the language itself.
 Copyright only protects expression of an idea, not the idea itself.

Could you elucidate on what constitutes the programming language? I'm
curious because I'd always understood that there was a patent on the
syntactic form

f x y = x + y , if x == y
      = x - y , if g x
      = x * y , otherwise

in functional programming languages held by the person which holds the
various licences for Miranda. (I've been told this by two people;  however
I've never actually seen any published reference so I could be wrong.)
However there's not a problem with the isomorphic Haskell
construct

f x y | x==y      = x + y
      | g x       = x - y
      | otherwise = x * y

because it's the syntactic construct that's covered, not the idea of
`equations defined by clauses with side conditions'.

Does this mean that if I was (for some bizarre reason) so inclined and
could produce a language which didn't use any techniques which could be
disallowed due to `prior art', I could get `patent coverage' on the entire
language by dealing with the implementation and the syntactic appearance
separately?

(As Fergus said, although tedious, an understanding of this area seems
useful to those working in any kind of computer science.)

___cheers,_dave__
email: [EMAIL PROTECTED]   "All shall be well, and all shall be
www.cs.bris.ac.uk/~tweed/pi.htm   well, and all manner of things
work tel: (0117) 954-5253 shall be well." 





RE: Haskell 2 -- Dependent types?

1999-02-17 Thread D. Tweed

On Wed, 17 Feb 1999, michael abbott wrote:

 As a C++ user (with a background in categories) patiently waiting for
 something a lot better, I personally favour two principles:
 1.let's go for undecidable type checking.  I want the compiler to be able
 to do as much work as possible: ideally, everything that can be resolved at
 compile time should be if only we can express this correctly.
 2.in the face of the above, we need to give the compiler more guidance.
 Personally, I favour type declarations everywhere: all identifiers should be
 introduced as being of a particular specified type.

My personal ideals for a programming language are:

(1) The compiler catches as many language inconsistencies as possible
rather than resolving them in possibly incorrect ways. 

(2) A program should be as easily readable as reasonably possible,
which strongly suggests being as free from `noise' as possible.

(For example, try doing simple things with the pair STL class and see how
soon relatively simple expressions become incredibly opaque because of the
sheer length of the identifiers make_pair, .first, .second and the fact
that, to maintain portability to compilers with slightly older versions of
the type-conversion algorithm, you have to write things with casts that
express the desired type

pair<float,float> f=g+make_pair(float(5.0),float(3.0))

and not just

pair<float,float> f=g+make_pair(5.0,3.0)

In practice of course the first problem can be macro'd away.)

Hopefully the above digression supports my case that being explicit
everywhere just to close gaps that can be automatically closed by simple
(and easily human comprehensible) algorithms can make programs much harder
to read, and hence harder to understand and to detect algorithmic errors in.

I'd prefer only to have to put in type decls for identifiers when the
compiler genuinely can't use a simple algorithm to deduce the unique
interpretation that fits, PROVIDING THIS ALGORITHM IS SUFFICIENTLY SIMPLE
THAT YOU CAN APPLY IT IN YOUR HEAD. If not then I suppose being explicit
everywhere does become a better way to go.

___cheers,_dave__
email: [EMAIL PROTECTED]   "All shall be well, and all shall be
www.cs.bris.ac.uk/~tweed/pi.htm   well, and all manner of things
work tel: (0117) 954-5253 shall be well." 





Re: Implementation of list concat and subclass condition

1999-01-22 Thread D. Tweed

On Fri, 22 Jan 1999, David Barton wrote:

 Peter Møller Neergaard writes:
 
1) The implementation of list concatenation ++.  In the Haskell
   report it is stated that ++ in general is an operator on monads.
   In the case of lists, ++ works as list concatenation.  However,
   I cannot find any place describing whether the implementation of
   ++ is supposed to be better than the straight-forward:
 
   [] ++ ys = ys 
   (x:xs) ++ ys = x : (xs ++ ys)
 
 
 See Okasaki, "Purely Functional Data Structures" for a discussion on
 catenable lists and queues. But keep in mind that you may not need it;
 while the running time of this algorithm is O(n), in most contexts you
 will be accessing the resulting list from the head anyway.  This means
 that the cost of the concatenation can be amortized over the list
 access, leading to amortized O(1) running time.  Even if you don't
 access the entire list, lazy evaluation means the unaccessed part of
 the list won't be evaluated.  Again, Okasaki gives a good summary of
 amortized cost in the presence of lazy evaluation, and of methods of
 proving amortized cost bounds.

With regards to Peter's original motivation for the question though, from
what I remember about the complexity argument from Bird, de Moor and Jones
you don't need anything more efficient than the above definition in order
get the required linear complexity (and the additional criterion that the
algorithm is `on-line'). Pursuing more efficient implementations too far
might well be a red herring. (I might be wrong about this of course.) 

___cheers,_dave_
email: [EMAIL PROTECTED]   "I'm guilty of a lot of things but I've
www.cs.bris.ac.uk/~tweed/pi.htm   never violated the law of conservation
work tel: (0117) 954-5253 of energy!" -- Bart Kosko, Nanotime







Re: Stream of random comments continues

1998-12-04 Thread D. Tweed

On Fri, 4 Dec 1998, Keith Wansbrough wrote:

 Surely it would be better to split the one stream into several infinite ones:
 
   splitStream :: [a] -> ([a],[a])
 
   splitStream xs = unzip (spl xs)
 where spl (x:y:xs) = (x,y):(spl xs)
 
 Then you don't have to know how many you are going to use from each stream.

Closer inspection reveals this is not necessarily the best idea
(particularly if you're going to repeat the trick several times on various
subsets) because you can get nasty correlations between the various
streams even from quite a good random number generator. There was a paper
published in the JFP about a better way of splitting streams which I think
appeared sometime between January 1996--October 1996. (Sorry I can't be
more specific -- I didn't really keep any notes on all the
frantic background reading I was doing :-) )

___cheers,_dave_
email: [EMAIL PROTECTED]   "I'm guilty of a lot of things but I've
www.cs.bris.ac.uk/~tweed/pi.htm   never violated the law of conservation
work tel: (0117) 954-5253 of energy!" -- Bart Kosko, Nanotime






Re: proposal for language

1998-07-13 Thread D. Tweed

On Mon, 13 Jul 1998, Eric Blough wrote:

 Alastair Reid writes:
   [EMAIL PROTECTED] (S.D.Mechveliani) writes:
Recent Haskell ignores the possibility of the automatic type 
conversion. Thus, 
  1 + 1%2  
is ill-typed.
   
   and goes on to propose a fix.
   
   This expression is perfectly well typed: Hugs 1.4 accepts it without
...
 I think that Sergey is offering a proposal for numeric type coercion,
 such that (for example) for n :: Int and x :: Float, n*x will be well
 typed, without the need to write (fromInteger n)*x.

To add my 2c... I think the original proposal was a general scheme for
defining `implicit coercions' via a multiparameter typeclass construction
which is treated specially by the compiler, and numeric conversions were
the particular motivating example. 

I've no problem with this in principle, but I would like some way of
ensuring that, if I were using other people's code, all their implicit
conversions are securely locked away in their modules only, even if the
constructors for which they are defined have been exported into my code.
In particular, although it is sometimes a nuisance, I prefer Haskell's
complaints about mixing Ints, Floats, etc. to C's automatic conversions,
which mean that trivial typos can completely corrupt the numerical
components of a program without any compiler warnings. (I'm afraid I come
from a `formal methods' background and truly believe that I'm barely able
to handle programming reliably; I WANT the system to point out all the
obvious things I might have done wrong, even if they would make sense
after some sequence of implicit conversions.) So I'd want to be sure that
numbers are treated as they are currently within any code I write,
regardless of what any other modules may do.
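
For illustration (my own sketch, not part of the proposal under
discussion), this is the kind of explicit conversion Haskell currently
insists on, and which the typo argument above is defending:

  scale :: Int -> Float -> Float
  scale n x = fromIntegral n * x
  -- writing just `n * x' is rejected: the compiler refuses to pick a
  -- numeric conversion on our behalf, which is exactly what catches the
  -- trivial typos mentioned above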

___cheers,_dave__
email: [EMAIL PROTECTED]   "valid PERL regexp: text not produced by
www.cs.bris.ac.uk/~tweed/pi.htm any of infinitely many monkeys after
work tel: (0117) 954-5253   an infinitely long time." 





Re: Teaching Haskell

1998-06-24 Thread D. Tweed



On Wed, 24 Jun 1998, Erik Meijer wrote:

 and has written substantial programs.  Please, no more "introduction to fp"
 books!
 
 
 This is exactly why the summerschools on advanced functional
 programming are there. After one in Sweden and one in the USA,
 the third school will be in Braga, Portugal:
 
 http://alfa.di.uminho.pt/~afp98/
 
 All the notes were published as Springer LNCS, LNCS 925 and LNCS 1129.

Without wanting to be overly critical, I'd claim this isn't really what
people are looking for. If you happen to be at a university department
where FP is sufficiently strong, then there will probably be copies
available. If you are at a differently oriented department (*), or at a
commercial company interested in looking at functional programming, then
LNCS volumes are likely to be difficult to obtain and a little expensive.
But you don't really need high-class, refereed publications describing
original work, just a book which covers some of the more recent
developments used in real programs in a systematic way. Since such a book
might well have higher-volume sales, it might also be cheaper.

Just my personal view, of course.
David Tweed.

(*) Don't draw any conclusions about my department from this; it's
actually in the first class.





Re: Evaluating Haskell

1997-08-27 Thread D. tweed



On Tue, 26 Aug 1997, David Wilczynski wrote:

 1) JAVA -- Are there any plans to compile Haskell into byte codes for
 execution on the Java Virtual Machine? The Java issue is very important.

This raises an interesting question (although it doesn't really directly
help David). From what I've read, the JVM is designed to be a
platform-independent machine code which:

I)   is quickly and efficiently mappable onto a variety of real
     architectures;
II)  is optimised for representing Java programs; and, in particular,
III) is designed to be quickly checkable for security concerns (during the
     download phase). However, this is based on the assumption that what
     is being processed is Java; instructions which are safe in the
     context of other languages may not qualify as safe for the JVM.

Given II (and III), it's not surprising that compiling a functional
language to the JVM is so fraught.
Is there any mileage in trying to promote an additional, alternative
virtual machine which still retains advantages I and III but has taken a
step backwards so that it's more suitable for languages other than Java?
(From looking at the Pizza mailing list, even Pizza (an extension to Java
that allows higher order functions and lazy evaluation) has problems with
simple, logical extensions which cause no conceptual problems but which
just don't fit into the range of things the JVM is prepared to represent).

It seems that other communities -- logic programming, simulation
languages, and so on -- accept that lightweight, web-transmittable
programming is an important future direction, but they face a problem
when the JVM is the only bytecode they can use. (One first step might,
perhaps, be to see how different the bytecodes of the JVM and the internal
bytecode of nhc are.)

Any thoughts? Any errors in the above?

David Tweed






Re: Evaluating Haskell

1997-08-27 Thread D. tweed

Firstly, sorry about the double post -- my mailer seems to have the idea
that _any_ e-mail address _anywhere_ in the header should be replied to.

On Wed, 27 Aug 1997, Hans Aberg wrote:

 At 10:35 97/08/27, D. tweed wrote:
 .. From what I've read, the JVM is designed to be a
 platform independent machine code which:
 
 I)  is quickly and efficiently mappable onto a variety of real
 architectures.
 II) is optimised for representing Java programs and in particular:
 III)is designed to be quickly checkable for security concerns (during the
 download phase). However this is based on the assumption that what is
 being processed is Java; instructions which are safe in the context of
 other languages may not qualify as safe for the JVM.
 
   First note that i above is ambiguous: An efficient mapping onto several
 real architectures need not be distributed, and the latter steals
 efficiency. Simon L Peyton Jones is working on "C--" bytecodes which are
 not distributed, but can be used on several platforms.

Yes. What I was trying to say is that it was sufficiently close that just-
in-time compilation becomes psychologically acceptable.

As to the point about C--, from what I've read I think it qualifies more
as an intermediate language (a la Core or Henk) than as a compact
bytecode. Is this correct? Since we're talking about the JVM, we clearly
want (at least) runnable-as-applet capability. What I was asking is
whether using the JVM rather than a more general bytecode isn't like
forcing everything to look like a nail simply because the only tool
you've got is a hammer.

Dave






Using `newtype' efficiently

1997-06-25 Thread D. tweed

Hi,
I'm writing a program for which a major part of both the code and (I
think) the execution time will be taken up by a parser (written using
parsing combinators a la Hutton & Meijer's report on monadic parser
combinators). In order to try to catch silly slips through type checking,
I wanted to use newtypes for various building blocks. Consequently I have
a few lines of the form

pEmbed (\x -> LABEL x) unlabelledParser

where unlabelledParser returns [String], say, which should become
LABEL [String], and pEmbed applies a function to the result of a parser,
primarily in order to change its type and/or shape.
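
For concreteness, here is a minimal self-contained sketch of the pattern;
the definitions of Parser and pEmbed below are my guesses (the real ones
aren't shown above), and LABEL stands for one of the newtype building
blocks:

  newtype Parser a = Parser (String -> [(a, String)])
  newtype LABEL    = LABEL [String]

  -- pEmbed applies a function to a parser's result, changing its
  -- type and/or shape.
  pEmbed :: (a -> b) -> Parser a -> Parser b
  pEmbed f (Parser p) = Parser (\s -> [ (f x, rest) | (x, rest) <- p s ])

  labelledParser :: Parser [String] -> Parser LABEL
  labelledParser = pEmbed (\x -> LABEL x)
  -- with a compiled newtype the constructor LABEL is erased, so the
  -- lambda really is `id' at run time; whether the surrounding
  -- `pEmbed id' is then optimised away depends on the compiler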

My question is: how much is this redundancy going to cost? Clearly the
lambda abstraction is just id, but, less obviously, (pEmbed (\x -> LABEL
x)) is now also id. Presumably none of the Haskell compilers can figure
this out, though? (I'm currently developing my (incomplete) program with
Hugs (1.4), and as a test I took out all the newtypes and associated
redundancy so far and then tested both versions on a small,
NON-PATHOLOGICAL example. Nothing else was changed and all precautions
were taken (e.g., pre-evaluating CAFs to their limits). The original
version took 24001 reductions and 42541 heap cells, whilst the stripped
version took 22093 reductions and 38285 cells. So FOR HUGS the newtypes
have added about 8.5% to reductions and 11% to heap cells. I suspect
that, being an interpreter, Hugs has to implement `newtype' as `data',
inflating the figures, but also can't do some optimisations compilers
can, deflating the figures.)


Am I writing my code in an obtuse manner (is there a better way?)?
Thanks,
David Tweed
---
homepage: http://www.cs.bris.ac.uk/~tweed/ work tel: (0117) 9545104
"... we think mathematics is fun and we aren't ashamed to admit the
 fact."   -- Donald Knuth, Ronald Graham and Oren Patashnik