Re: [Haskell-cafe] Ticking time bomb

2013-02-03 Thread Richard O'Keefe

On 1/02/2013, at 3:32 PM, Kevin Quick wrote:

 Without details of git's trust/verification model, it's difficult to see how 
 this particular SCM tool provides the trust capabilities being discussed any 
 better than a more focused solution.  Additionally, the use of git is also 
 difficult for many Windows users (80MB installed footprint, last I tried).

Put it on an outboard 1.5TB drive (price about $100) and it will
take about 1/18,000th of the space.  And Windows Update downloads about that
amount or more in patches every time I log in (about once a fortnight).

So it can't really be the size that makes git-on-Windows difficult.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] linking errors while compile hugs98 in macos

2013-02-03 Thread Richard O'Keefe

On 1/02/2013, at 6:19 PM, Brandon Allbery wrote:
 Probably you could, but the effort needed might be significant.  In 
 particular, fixing things like environ; see 
 https://bugs.ruby-lang.org/attachments/2591/ruby-changes.patch for the kind 
 of change you'll need to make, although I have to say the way they chose to 
 do it is risky at best (but sadly typical).

The Ruby patch pointed to does not explain _why_ it is needed.
'environ' is required by the Single Unix Specification (at least
in issue 7; I don't have an earlier version handy just now).  And
it works just fine in MacOS 10.6 and did in 10.5 (32-bit and 64-bit).






Re: [Haskell-cafe] Most used functions in hackage

2013-02-03 Thread Richard O'Keefe

On 2/02/2013, at 7:05 AM, Rustom Mody wrote:

 Instead let's make a map (functor?) from learning the programming language 
 Haskell to learning the natural language English.
 So I don't know English (and yeah there are Gödelian anomalies in that 
 statement) and I gather that vocabulary is a key to mastering the language.
 Now Webster is a bit too fat to cram up as a whole so I decide to isolate the 
 5000 most used English words. 
 Do you think my English mastery will be improved that way?

*Stopping* there will not let you master English,
but *beginning* with a core of high frequency words
is a good idea for bootstrapping.

Basic English had 850 words.
Globish has 1500 words.
The Longman Dictionary of Contemporary English uses a
controlled core of about 2200 words in all definitions.

 Surely Webster had a bigger vocabulary than Shakespeare.

Many people called Webster have written.
Assuming you mean the lexicographer,
just because someone is able to put a word into a dictionary
does not mean that it is part of their active vocabulary.
(Vide Johnson's famous "Ignorance, madam, pure ignorance.")
Shakespeare's active vocabulary was on the close order of
20,000 terms.  (This actually seems to be a typical size for
pre-literate natural language entire vocabularies.)  It
would be surprising if _in his other writings_ Webster's
active vocabulary were noticeably larger than Shakespeare's.
 
 IOW mastering the paradigm is more important than the details.

False dichotomy.  Haskell (and English) being Haskell (and
English), you _can't_ learn a significant chunk of the
library (high-frequency vocabulary) without learning something
about the way things are put together.

You can learn the English word 'vomer' without learning
anything important about English, but learn what 'although'
means AND HOW IT IS USED and you have learned quite a bit.

In the same way, you can't really learn 'liftM' without
learning quite a lot about Haskell.
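To make that concrete, here is a minimal sketch (mine, not from the original mail) of what even a trivial use of liftM already presupposes: Monad, type classes, polymorphism, and higher-order functions.

```haskell
import Control.Monad (liftM)

-- liftM promotes a pure function to act inside any monad, so even
-- this tiny use presupposes Monad, type classes, polymorphism,
-- and higher-order functions.
main :: IO ()
main = do
  print (liftM length ["ab", "cde"] :: [Int])      -- the [] monad
  print (liftM length (Just "abcd") :: Maybe Int)  -- the Maybe monad
```

(In modern Haskell liftM is essentially fmap, which only deepens the point: the Functor/Monad hierarchy comes along for the ride.)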

 I have a couple of pages on my blog:
 http://blog.languager.org/2012/10/functional-programming-lost-booty.html
 gives a few of the basics that FPers should know (IMHO) before going to 
 advanced stuff. I should mention that it was written because Haskell is 
 becoming increasingly hard for beginners: the focus on 'type-magic' is 
 overshadowing the basics. [In any case after 25 years of teaching, I am 
 finding it harder and harder to teach] If you have crossed over the basic 
 stage it may not be much use to you :-)

Not just beginners, either.

There _is_ a problem with the idea of trawling the top N functions
from Hackage.  That is that the code written for Hackage is, by and
large, written to be used rather than understood by beginners.  These
are not necessarily the top N functions that it is useful for a
beginner to use or know.  It's probably better to pick one module at
a time that provides a service you really intend to use and learn
what you can from that.  (Like learning how to read one complete
story.)




Re: [Haskell-cafe] list comprehansion performance has hug different

2013-01-29 Thread Richard O'Keefe

On 29/01/2013, at 10:59 PM, Junior White wrote:

 So this is a problem in lazy evaluation language, it will not appear in 
 python or erlang, am i right?

Wrong.  Let's take Erlang:

[f(X, Y) || X <- g(), Y <- h()]

Does the order of the generators matter here?
You _bet_ it does.
First off, in all of these languages, it affects
the order of the results.  Let's take a toy case:

g() -> [1,2].
h() -> [a,b]. % constants
f(X, Y) -> {X,Y}. % a pair

[f(X, Y) || X <- g(), Y <- h()]
 yields [{1,a},{1,b},{2,a},{2,b}]
[f(X, Y) || Y <- h(), X <- g()]
 yields [{1,a},{2,a},{1,b},{2,b}]

Now let's change it by giving g/0 and h/0 (benign) side effects.
 g() -> io:write('g called'), io:nl(), [1,2].
 h() -> io:write('h called'), io:nl(), [a,b].
Generating X before Y yields
 'g called'
 'h called'
 'h called'
 [{1,a},{1,b},{2,a},{2,b}]
Generating Y before X yields
 'h called'
 'g called'
 'g called'
 [{1,a},{2,a},{1,b},{2,b}]

If a function call may yield side effects, then the compiler
must not re-order or coalesce calls to that function.
This applies to both Erlang and Python (and to SETL, which
had set and tuple comprehensions before Erlang, Python, or
Haskell were conceived).
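For comparison, a small Haskell sketch (mine, not from the original thread): the generator order still fixes the order of the results, even though pure generators cannot exhibit the side-effect difference.

```haskell
-- Swapping the generators changes the order of the results in
-- Haskell too; only the side-effect ordering question disappears,
-- because the generators here are pure.
pairsXY :: [(Int, Char)]
pairsXY = [ (x, y) | x <- [1, 2], y <- "ab" ]

pairsYX :: [(Int, Char)]
pairsYX = [ (x, y) | y <- "ab", x <- [1, 2] ]

main :: IO ()
main = do
  print pairsXY   -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
  print pairsYX   -- [(1,'a'),(2,'a'),(1,'b'),(2,'b')]
```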




Re: [Haskell-cafe] category design approach for inconvenient concepts

2012-12-18 Thread Richard O'Keefe

On 19/12/2012, at 11:03 AM, Christopher Howard wrote:
 Since I received the two responses to my question, I've been trying to
 think deeply about this subject, and go back and understand the core
 ideas. I think the problem is that I really don't have a clear
 understanding of the basics of category theory, and even less clear idea
 of the connection to Haskell programming. I have been reading every link
 I can find, but I'm still finding the ideas of objects and especially
 morphisms to be quite vague.

Roughly speaking, Category Theory is the study of mathematical analogies
that work.  You notice that the symmetries of a cube have properties that
remind you of the properties of integer addition, and end up constructing
Group Theory to describe a huge range of things that can be modelled as
things acted on by operations to give other things, where the operations
follow particular laws.  And you get this astonishing range of theorems that
have lots of not-obviously-related applications.  Then you notice that
Group Theory and Lattice Theory and a bunch of other things seem to have
more analogies than you expected, and you abstract out what they have in
common (structured collections of things with transformations from one
structure to another that preserve various properties of the structured
collections).

What's the connection to Haskell programming?

Reusability.

Both Category Theory and Haskell push rather harder on abstraction than
most people are comfortable with, in order to get mathematical results in
the one case and functional code in the other that can be applied to lots
of problems.

The price of reusability is vagueness.

Let me offer an analogy.   At primary school I was introduced to the
greatest common divisor and Euclid's algorithm.  Here's this algorithm
that applies to integers and this is what it tells you.  And in a
program you might write gcd(x,y).  These days, I look at gcd and see
oh yes, the meet in the 'divides' lattice and write x/\y, where
the same symbol (meet, /\) can be used for gcd, set intersection,
bitwise and, unification, ...) and I know more _laws_ that /\ obeys
than I did back then, but the integers have receded from view.
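As an illustrative sketch (the (/\) alias below is hypothetical, not a standard operator), the "meet" reading of gcd can be written down directly, and the lattice laws checked at sample points:

```haskell
-- A hypothetical (/\) alias making the "meet" reading concrete:
-- gcd is the meet of the divisibility lattice, and satisfies the
-- usual meet laws (idempotent, commutative, associative).
(/\) :: Integer -> Integer -> Integer
(/\) = gcd

main :: IO ()
main = do
  print (12 /\ 18)                              -- 6
  print (12 /\ 12 == 12)                        -- idempotent
  print (12 /\ 18 == 18 /\ 12)                  -- commutative
  print ((12 /\ 18) /\ 8 == 12 /\ (18 /\ 8))    -- associative
```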

 The original link I gave
 http://www.haskellforall.com/2012_08_01_archive.html purposely skipped
 over any discussion of objects, morphisms, domains, and codomains. The
 author stated, in his first example, that Haskell functions are a
 category, and proceeded to describe function composition.

This mailing list often talks about the Hask category.
For example, in Orchard's blog (http://dorchard.wordpress.com) we find

The Hask category

The starting point of the idea is that programs in Haskell
can be understood as providing definitions within some
category, which we will call Hask.  Categories comprise
a collection of objects and a collection of morphisms
which are mappings between objects.  Categories come
equipped with identity morphisms for every object and
an associative composition operation for morphisms
(see Wikipedia for a more complete, formal definition).
For Hask, the objects are Haskell types, morphisms are
functions in Haskell, identity morphisms are provided
by the identity function, and composition is the usual
function composition operation.

In Hask, Haskell functions are the *morphisms* of a category, not its
objects.  That's not to say that you couldn't have a category whose
objects were (in some sense) Haskell functions, just that it would be
a different category.  Rather confusingly, the objects of Hask are
*not* data values, they are data *types*.  This is just like the way
that in the category of sets, the objects are *sets*, not their
elements.

But of course category theory is too general for a neat summary like
"objects are sets or types and morphisms are functions between them".
No, objects are _whatever_ you choose to model as not-further-analysed
objects, and morphisms are _whatever_ connections between your objects
you choose to model as morphisms, so long as they obey the laws.
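A minimal sketch of Hask's ingredients in code (sample-point checks only; the laws hold for all inputs, but cannot be proved inside Haskell):

```haskell
-- In Hask: objects are types, morphisms are functions, identity
-- morphisms are id, and composition is (.).
f :: Int -> Bool      -- a morphism from the object Int to the object Bool
f = even

g :: Bool -> String   -- a morphism from Bool to String
g b = if b then "yes" else "no"

main :: IO ()
main = do
  print ((id . f) 3 == f 3)                         -- left identity
  print ((f . id) 3 == f 3)                         -- right identity
  print (((g . f) . succ) 1 == (g . (f . succ)) 1)  -- associativity
```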

I am struggling with category theory myself.  I'll be learning about
some kind of mathematical structure, getting to grips with the elements
and the operations on the elements, and suddenly the book will move up
a level and instead of looking at patterns between elements, will look
at patterns of morphisms.  And to add insult to injury, the book will
claim that moving from one large space to an exponentially larger
(if not worse) space remote from the things I'm trying to understand
actually makes things _simpler_ to think about.  One of my colleagues
here divides people into set theory people and category theory
people.  He and I both classify ourselves as set theory people.
Maybe in another 50 years I'll be comfortable with category theory,
but by then I'll be dead.

Right now, the issues for you are

 - Haskell people care about 

Re: [Haskell-cafe] category design approach for inconvenient concepts

2012-12-17 Thread Richard O'Keefe

On 18/12/2012, at 3:45 PM, Christopher Howard wrote:
 Recently I read this article I happened across, about the categorical
 design pattern:
 
 http://www.haskellforall.com/2012/08/the-category-design-pattern.html

It's basically the very old idea that an Abstract Data Type
should be a nice algebra: things that look as though they
ought to fit together should just work, and rearrangements
of things ought not to change the semantics in surprising
ways (i.e., Don't Be SQL).

 However, what I'm wondering about is ideas that can be composed but
 that don't seem to fit the idea of category, because they don't obey
 the associativity law.

 Say you created a type called Component (C for short), the idea being
 to compose Components out of other Components. Every C has zero or more
 connectors on it. Two Cs can be connected to form a new C using some
 kind of composition operator (say, .), provided there are enough
 connectors (one on each).

Categories contain two things:
objects
and arrows that connect objects.  Some important things about arrows:
 - for any object x there must be an identity id_x : x -> x
 - any two compatible arrows must compose in one and only one way:
   f : x -> y and g : y -> z  =>  g . f : x -> z
 - composition must be associative: (f . g) . h = f . (g . h)
   when the arrows fit together.

Of course for any category there is a dual category,
so what I'm about to say doesn't really make sense,
but there's sense in it somewhere:  the things you are
trying to hook together with your . operator seem to me more
like objects than arrows, and it does not seem as if
they hook together in one and only one way, so it's not so
much _associativity_ being broken, as the idea of . being
a composition in the first place.
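For reference, Haskell's Control.Category class encodes exactly this "one and only one composition" requirement; a minimal lawful instance (a sketch, using a hypothetical Fn wrapper around plain functions) looks like this. A Component whose pieces could be joined at any of several free connectors would have no such unique (.), which is the real obstacle.

```haskell
import Prelude hiding (id, (.))
import Control.Category (Category(..))

-- Wrapped functions: the textbook Category instance.  Note that (.)
-- composes two compatible arrows in exactly one way.
newtype Fn a b = Fn { runFn :: a -> b }

instance Category Fn where
  id          = Fn (\x -> x)
  Fn g . Fn f = Fn (\x -> g (f x))

main :: IO ()
main = print (runFn (Fn (+ 1) . Fn (* 2)) 5)   -- (5*2)+1 = 11
```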





Re: [Haskell-cafe] Can cabal be turned into a package manager?

2012-12-13 Thread Richard O'Keefe

On 13/12/2012, at 7:12 PM, Rustom Mody wrote:

 On Thu, Dec 13, 2012 at 1:29 AM, Brandon Allbery allber...@gmail.com wrote:
 On Wed, Dec 12, 2012 at 12:51 PM, Andre Cunha andrecunha@gmail.com 
 wrote:
 Janek, did you mean something like Rubygems (http://rubygems.org)? It manages 
 the download, installation and manipulation of Ruby packages, called gems. 
 A gem can contain executable programs or libraries (just like traditional 
 packages, like .rpm). Rubygems also handles dependencies between gems, and 
 allows you to update them.
 
 But doesn't solve the actual problem;
 
 Considering that this question is such a FAQ, I believe it would be 
 worthwhile discussing  what we think the 'actual problem' is.
 
 When Algol-60 was being implemented, the challenge was how to compile using 
 only 2000 words of memory (or something as ridiculous as that). The solution 
 was to make a compiler with 20-odd passes.
 [Sorry if the details are wrong; it's from (neurological) memory]

Why rely on memory?
The Algol-60 compiler Dijkstra worked on was for the Electrologica X1.
"The basic X1 machine, fitting in a large writing desk,
 consisted of an arithmetic unit and several registers,
 in particular two 27-bit accumulators A and S, a condition register,
 an instruction register, and a 16-bit index register.  A and S could
 be used together as a single double-length register for multiply and
 divide operations.  The basic machine had a built-in 'live'
 (i.e. random-access) memory of 512 words of 28 bits (including 1 sign
 bit and 1 parity bit); and 700 words of 'dead' (i.e. read-only) memory.
 More memory could be added in separate storage cabinets, up to 32768
 words, including additional read-only memory.  Normally there was no
 magnetic drum, disk or other type of secondary memory (a magnetic
 drum was an optional extension, however)."
[http://www.science.uva.nl/faculteit/museum/X1.php   -- edited lightly]
Apparently the actual machine the compiler was built on had 4 kilo-words.

That compiler read the source paper tape exactly twice.

The greatest shortcoming of the system, however, was the almost
 complete absence of syntax checks and run-time checks, something
that was to be repeated in the development of C.  Another front end
that did more thorough syntax checking was written a few years later;
Lint saw _that_ part of Algol 60 history repeated too.

A leaflet in my letterbox yesterday advertised a headless box with 16GB
of memory for NZD 700.  We've come a long way.





Re: [Haskell-cafe] containers license issue

2012-12-13 Thread Richard O'Keefe

On 14/12/2012, at 7:45 AM, Christopher Howard wrote:
 Just thought I'd mention: It is possible for anyone involved in a FOSS
 project to get pro bono legal advice from the SFLC, from actual lawyers
 who are highly familiar with the legal aspects of FOSS licenses:
 
 https://www.softwarefreedom.org

Intimately familiar with New Zealand law, are they?




Re: [Haskell-cafe] education or experience?

2012-12-09 Thread Richard O'Keefe

On 10/12/2012, at 6:34 AM, Malcolm Wallace wrote:

 
 On 9 Dec 2012, at 16:31, Doug McIlroy wrote:
 
 In fact the FP community came late to some of these, just as 
 programming languages at large came late to garbage collection.
 
 Lazy evaluation--at the heart of spreadsheets since the beginning.
 
 Lazy evaluation for the lambda calculus - 1971 (Wadsworth)
 Lazy evaluation in a programming language - 1976 (Henderson & Morris, 
 Friedman & Wise)

Pop-2 had lazily evaluated streams about 1970.  In fact that's
how it did input:  a 'character producer' was nothing other than
a lazily evaluated list of characters.  The book about it appeared
in 1971.






Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-25 Thread Richard O'Keefe

On 24/11/2012, at 5:26 PM, wren ng thornton wrote:

 On 11/20/12 6:54 AM, c...@lavabit.com wrote:
 Hello,
 
 I know nothing about compilers and interpreters. I checked several
 books, but none of them explained why we have to translate a
 high-level language into a small (core) language. Is it impossible
 (very hard) to directly translate high-level language into machine
 code?
 
 It is possible to remove stages in the standard compilation pipeline, and 
 doing so can speed up compilation time. For example, Perl doesn't build an 
 abstract syntax tree (for now-outdated performance reasons), and instead 
 compiles the source language directly into bytecode (which is then 
 interpreted by the runtime). This is one of the reasons why Perl is (or was?) 
 so much faster than other interpreted languages like Python etc.

I have found Perl anything from the same speed as AWK (reading and writing
lots of data with hardly any processing) to 10 times slower than AWK (with
respect to the 'mawk' implementation of AWK).

The deeply saddening thing here is that there are lots of improvements I
_know_ how to make to mawk to make it even faster, but the thing that stops
me is the way mawk generates word-code directly without an AST.

I don't know why Perl is direct-to-byte-code, but I'm pretty sure why mawk
is direct-to-word-code and The One True Awk interprets its AST.  AWK used
to run on PDP-11s and for large source files had room for VM instructions
or ASTs but not both at the same time.

Designing a compiler for fast *compiling* leads to one kind of
design; designing for fast *running* of generated code leads to another.

And run times for scripting languages depend on things like the structure
of hash tables, the quality of hashing functions, the cripplingly excessive
richness of certain regular expression libraries, the memory management
scheme, ...





Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-22 Thread Richard O'Keefe

On 23/11/2012, at 1:56 AM, Jacques Carette wrote:

 On 20/11/2012 6:08 PM, Richard O'Keefe wrote:
 On 21/11/2012, at 4:49 AM, c...@lavabit.com wrote:
 
 Well, I don't know. Would it save some time? Why bother with a core 
 language? 
 For a high level language (and for this purpose, even Fortran 66 counts as
 high level) you really don't _want_ a direct translation from source code
 to object code.  You want to eliminate unused code and you want to do all
 sorts of analyses and improvements.  It is *much* easier to do all that to
 a small core language than to the full source language.
 
 Actually, here I disagree.  It might be much 'easier' for the programmers to 
 do it for a small core language, but it may turn out to be much, much less 
 effective.  I 'discovered' this when (co-)writing a partial evaluator for 
 Maple: we actually made our internal language *larger*, so that we could 
 encode more invariants syntactically.  This ended up making our jobs 
 considerably easier, because we did not have to work so hard on doing fancy 
 analyses to recover information that would otherwise have been completely 
 obvious.  Yes, there were a lot more cases, but each case was relatively 
 easy; the alternative was a small number of extremely difficult cases.

I don't think we do disagree.  We are talking about the same thing:
``not hav[ing] to work so hard on doing fancy analyses''.
The key point is that an (abstract) syntax *designed for the compiler*
and a syntax *designed for programmers* have to satisfy different
design goals and constraints; there's no reason they should be the same.

As a very very crude example of this, with its user-defined operators,
Prolog lets you write rules using lots of (unquoted) operators
to express yourself in a quasi-English sort of way, e.g.,
if Block must move and top of Block is not clear
   then clear top of Block.
Readable.  But lousy for processing.  You'd want to change it to
something like
action(clear_top(Block), [
must_move(Block),
\+ top_is_clear(Block)]).





Re: [Haskell-cafe] Cabal failures...

2012-11-21 Thread Richard O'Keefe
Let's put some numbers on this.

(1) In this country, you can buy a second-hand dual core desktop for NZD 200
(roughly USD 165, EUR 130).  You can buy a new laptop for NZD 400
(roughly USD 330, EUR 260).  Not fancy machines, but more than adequate
to compile and build stuff.  Shipping adds a fair bit to prices here.
So it _must_ be possible to buy a Windows box of some kind adequate for
compiling, building, and testing open source software, for even less
than that in North America or Europe.

It's really *NOT* the price of the box-with-Windows-installed.

(2) This department has a mix of Mac OS X, Linux (running on Apple dual-boot
boxes), and Windows (running on Apple dual-boot boxes).  The University
has quite a few Windows labs.   There would be _no_ students at this
University who did not have ready access to a Windows machine whenever
they wanted one.   The servers in the department all run some flavour of
UNIX, true.

(3) Given an intel Solaris, intel Linux, or intel Mac OS X box, VirtualBox
is free.  You can run Windows in VirtualBox.  Microsoft offer a full
Windows 7 Professional licence to University students for USD 30.  So
I really don't buy the idea of a student finding it hard to get Windows.
My University is part of the MSDN Academic Alliance, so staff get stuff
for no money of their own.

Windows 7 Home Premium is USD 200, Professional USD 300.  Probably better
to buy a cheap box that already has Windows.

What about software?

Well, Microsoft Visual Studio Professional 2012 is several times more expensive
than the box it runs on, and Office is not cheap either.  There are, as always,
special deals, e.g., 
https://www.dreamspark.com/Product/Product.aspx?productid=34
seems to make VC++ 2008 available free to students, and the MSDN Academic
Alliance makes this stuff easy for staff to get.  For everyone else,
Eclipse and NetBeans are free, and so are Cygwin and Mingw.

It took me about a day to download and install a large amount of free software,
giving me quite a decent environment.  (Of course, if someone were paying me to
do this, the University would charge NZD 150/hour, so free = NZD 1200 ...)
I even had Microsoft SUA (Services for Unix Applications -- think of it as
Cygwin from Microsoft but with a less horribly ugly terminal font).  I had ghc
and OCaml and SWI Prolog and Squeak and Dolphin Smalltalk and lots of good 
stuff.

So it's not really the availability of software either.

So am I a happy Windows hacker?

Actually, no.

I had a working tolerable setup under Windows Vista.   Despite its bad press, I
have to say I never had any trouble with Vista.  Then my (the department's) Mac
laptop needed something done to it -- I forget what -- and they said "while
we're at it, it would simplify our lives if we upgraded the Windows side to
Windows 7 like everyone else has now".  I said, "OK, but I _really_ don't want
to lose any of my programs."  And they lost everything beginning with the
letters M-Z, and what they didn't lose stopped working.  Apparently when
Windows went 64 bit they didn't leave \Program Files\ alone and add a
\Program Files 64\ directory.  Oh no!  Now \Program Files\ was exclusively
for 64-bit programs, and 32-bit ones were supposed to be in \Program Files 
(x86)\.
You can guess what that did to the surviving remnants of my environment.

How long did it take to rebuild my environment?
I don't know.  Except for installing Cygwin I haven't done it.
The changes to the user interface -- apparently just for the sake of change,
because absolutely nothing I do has become easier for me -- did nothing for
my facility with the system, and having to spend half an hour installing
updates every time I boot into Windows doesn't increase my enjoyment.
I don't want to even _think_ about Windows 8.





On 21/11/2012, at 3:21 PM, Clark Gaebel wrote:

 +1 to this. The friction of finding, setting up, and using Windows isn't even 
 comparable to just sshing into another unix box and testing something quickly.
 
 As a university student, I also find it relatively rare that I get to test on 
 a Windows machine. My personal computer runs linux, my technical friends run 
 linux or osx, and my non-technical ones run osx. Also, all the school servers 
 that I have access to run either FreeBSD or Linux.
 
 If I want to run something on linux system, I have about 40 different 
 computers that I can ssh into and run code on.
 
 If I want to run something on osx, I just have to call a friend and ask if 
 they can turn on their computer and allow me to ssh in (to my own account, of 
 course).
 
 If I want to run something on Windows, I have to track down a friend (in 
 person!), ask to borrow their computer for a few hours, get administrator 
 access to install the Haskell Platform, get frustrated that HP hasn't been 
 upgraded to 7.6, and give up.
 
 It's just not practical, especially for the large amount of small (500 LOC) 
 

Re: [Haskell-cafe] code length in Haskell, a comparison

2012-11-20 Thread Richard O'Keefe

On 20/11/2012, at 4:55 PM, Gregory Guthrie wrote:

 There is some interesting data in the article at:
 
   Code Length Measured in 14 Languages
http://blog.wolfram.com/2012/11/14/code-length-measured-in-14-languages/
 
 basically comparing program lengths in various languages, and some ensuing 
 discussion of how this relates to language expressiveness, etc.
 (He does all of his analysis in Mathematica, which is the goal of the 
 article.)

I'm not sure how interesting it actually is.
Let's study just one example: the digit sum problem.

This task is to take a Natural Number in a given Base and return the sum of 
its digits.

We are not given any bounds on the size of the Natural Number
nor any bounds on the Range.  Arguably, the code for almost all of the
programming languages shown is *wrong* due to making unwarranted
assumptions about integer size.

The Haskell example there is

digsum base = f 0 where
    f a 0 = a
    f a n = f (a+r) q where
        (q,r) = n `divMod` base
 
main = print $ digsum 16 255 -- FF: 15 + 15 = 30

Since some other examples assume the base is 2..16, so may we.
In that case,

import Data.Char
import Numeric
digsum b n = sum . map digitToInt $ showIntAtBase b intToDigit n ""

will do the job, and as an exercise in golfing myself,

import Numeric
digsum b n = sum . map fromEnum $ showIntAtBase b toEnum n ""

will also do it, and should work for larger bases.
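For completeness, the recursive version above can be made a self-contained program with an explicit Integer signature, which makes it visibly free of the integer-size assumptions criticised above (a sketch):

```haskell
-- Digit sum over arbitrary-precision Integers, for any base >= 2.
digsum :: Integer -> Integer -> Integer
digsum base = f 0
  where
    f a 0 = a
    f a n = f (a + r) q
      where (q, r) = n `divMod` base

main :: IO ()
main = do
  print (digsum 16 255)        -- FF: 15 + 15 = 30
  print (digsum 10 (10 ^ 50))  -- no bound on the size of the number
```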

In this particular test, Mathematica happens to score well
because it already has IntegerDigits[number, base] (the base
argument being optional) in its library.

So at least this exercise is much more about what happens to
be available already in the library for this language than
about anything intrinsic to the language.

There is worse.  Some versions read test data, while some have
a small number of test cases built in, and they do not agree
about which test cases are provided.

Looking at some other problems, some tasks have versions that
use code that is published on the web but not part of the
language standard and not included (or counted!) at the Rosetta
site itself.


There are lots of things you can learn by looking at the
Rosetta site.  I am not sure that the compactness of languages
is one of them.




Re: [Haskell-cafe] Compilers: Why do we need a core language?

2012-11-20 Thread Richard O'Keefe

On 21/11/2012, at 4:49 AM, c...@lavabit.com wrote:

 What would be the point in doing so?
 
 Well, I don't know. Would it save some time? Why bother with a core language?

For a high level language (and for this purpose, even Fortran 66 counts as
high level) you really don't _want_ a direct translation from source code
to object code.  You want to eliminate unused code and you want to do all
sorts of analyses and improvements.  It is *much* easier to do all that to
a small core language than to the full source language.





Re: [Haskell-cafe] local type denotation

2012-11-14 Thread Richard O'Keefe

On 15/11/2012, at 1:03 AM, Serge D. Mechveliani wrote:

 Please,
 how to correctly set an explicit type for a local value in the body of 
 a polymorphic function?

Other people have told you how to do it.
I'd like to tell you why you don't need to.

 
 Example (tested under  ghc-7.6.1):
 
 data D a = D1 a | D2 a (a -> a)

 f :: Eq a => D a -> a
 f (D1 x)   = x
 f (D2 x g) = let -- y :: Eq a => a
                  y = g x
              in  if x == y then x else g y

You say that you want y to have exactly the type a.
Look around.  Is there some data in scope with that type?
Yes: if (D2 x g) :: D a, then x :: a.
So you just want to say y has the same type as x.

There's a Prelude function

asTypeOf :: a -> a -> a
asTypeOf x y = x

So e1 `asTypeOf` e2 gives you the value of e1,
having first ensured that e1 and e2 have the same type.

So

f :: Eq a => D a -> a
f (D1 x)   = x
f (D2 x g) = if x == y then x else g y
  where y = g x `asTypeOf` x

You apparently already know that you don't need any of this
(thanks to x == y), but want to be explicit.  The question
is how explicit you want to be.  Using asTypeOf is sort of
half way between implicit typing and showing the type you
want _as_ a type.
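For the record, the fully explicit route that others will have mentioned is GHC's ScopedTypeVariables extension, which lets the a from the signature scope over the body (a sketch, assuming GHC):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

data D a = D1 a | D2 a (a -> a)

-- With an explicit forall, the signature's type variable a scopes
-- over the body, so the local signature y :: a means the *same* a.
f :: forall a. Eq a => D a -> a
f (D1 x)   = x
f (D2 x g) = if x == y then x else g y
  where
    y :: a
    y = g x

main :: IO ()
main = print (f (D2 3 (+ 1) :: D Int))
```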
 
The other question, I suppose, is _why_ you want to be explicit?




[Haskell-cafe] Unicode case (in)stability and Haskell identifiers.

2012-11-02 Thread Richard O'Keefe
I've been putting together a proposal for Unicode identifiers
in Erlang (it's EEP 40 if anyone wants to look it up).  In
the course of this, it has turned out that there is a technical
problem for languages with case-significant identifiers.

Haskell 2010 report, chapter 2.
http://www.haskell.org/onlinereport/haskell2010/haskellch2.html

varid → (small {small | large | digit | ' })\⟨reservedid⟩
conid →  large {small | large | digit | ' }

small→ ascSmall | uniSmall | _
ascSmall → a | b | … | z
uniSmall → any Unicode lowercase letter

large→ ascLarge | uniLarge
ascLarge → A | B | … | Z
uniLarge → any uppercase or titlecase Unicode letter

This is actually ambiguous: any ascSmall is also a uniSmall
and any ascLarge is also a uniLarge.  I take it that this
is intended to mean "any Unicode xxx letter other than an ASCII one"
in each case.

That's not the problem.  The definition currently bans Hebrew,
Arabic, Chinese, Japanese, all the Indic scripts, and basically
only allows Latin, Greek, Coptic, Cyrillic, Glagolitic,
Armenian, arguably Georgian, and Deseret (but not Shavian).
That's not the problem either.

The problem is that being a Unicode lower case, upper case,
or title case letter is not a stable property.

Unicode annex UAX#31 guarantees that
    X is a well-formed case-insensitive identifier now
    =>  X will always be a well-formed case-insensitive
        identifier
and that
    X is a well-formed case-sensitive identifier now
    =>  X will always be a well-formed case-sensitive
        identifier

What it does NOT guarantee is that an identifier will continue to
begin with the same *case*, or even that a letter will
continue to be classified as a letter.  So it is at least
technically possible for a valid Haskell 2010 varid
(conid) to turn into a conid (varid) or even cease to be
a legal Haskell identifier at all.  Unicode standard
Annex UAX#31 guarantees stability of being-an-identifier
by having an exceptional set for any letter that stops
being a letter to go into.  For example, there are
SCRIPT CAPITAL {B,E,F,H,I,L,M,P,R} characters, all of
which are capital letters except for SCRIPT CAPITAL P,
which is a symbol, but it's in the exception set so it's
still OK to use.  All of the SCRIPT CAPITAL letters were
in General Category So in Unicode 1.1.5 (the earliest for
which online data is available). In Unicode 2.1.8, all of
them were Lu except for SCRIPT CAPITAL P, which was Ll.
By Unicode 3.0.0, SCRIPT CAPITAL P was back to So.  Some
time later it switched over to Sm.  So we've had

SCRIPT CAPITAL P
- not a letter (1.1.5)
- is a lower case letter (2.1.8)
- not a letter again (3.0.0)

at least according to the on-line UnicodeData-version.txt
files.  Putting ℘ into the exceptional set means that a
UAX#31 identifier may still contain it, but not so a Haskell one.
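The current classification is easy to inspect from GHC (a sketch; the
category printed depends on the Unicode version that your base
library's Data.Char tables were generated from):

```haskell
import Data.Char (generalCategory, isLetter)

main :: IO ()
main = do
  -- U+2118 SCRIPT CAPITAL P (℘): a symbol, not a letter, in the
  -- Unicode tables current GHCs ship with, so it cannot appear in a
  -- Haskell identifier today.
  print (generalCategory '\x2118')
  print (isLetter '\x2118')
```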

There are two aspects to this instability.

(1) Because Haskell hews its own line instead of tailoring
UAX#31 the way Ada and Python do, Haskell cannot benefit
from the UAX#31 stability guarantee.  There _has_ been a
character that used to be legal in a Haskell identifier
that is not now.  That's Haskell's problem, not Unicode's,
and the Haskell community does not have to wait for anyone
else to address it.

(2) Even if you adopt one of the UAX#31 definitions verbatim,
the case distinction Haskell needs to make is not stable.

It appears that nobody who worked on UAX#31 was thinking about
languages like Prolog, Erlang, Clean, Haskell, F#, or Scala,
and that if the Unicode Consortium are told of the problem,
they will probably be happy to add some sort of "don't break
these languages" guideline.

Next week I intend to submit a proposal to the Unicode
consortium to consider this issue.

Would anyone care to see and comment on the proposal
before I send it to Unicode.org?  Anyone got any suggestions
before I begin to write it?

For the sake of argument, suppose that we are going to
stick with Xid_Start Xid_Continue* for the union of
variables and atoms (which is pretty much what Ada and
Python do), and the sole issue of concern is that there
should be a stable way to classify such a token as
beginning with default case or beginning with marked case.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Optimal line length for haskell

2012-10-30 Thread Richard O'Keefe

On 30/10/2012, at 5:56 PM, Alexander Solla wrote:
 For example, I generally prefer using the combinators directly when dealing 
 with functors, applicatives, and monads.  This can be written wide, but it 
 can also be written in the style of:
 
  f' = f <$> (a >>= g)
         <*> (b >>= h)
         <*> (c >>= i >>= k)
 
 That is perfectly sensible.  But if I had to repeat this syntactic construct, 
 I would probably do it wide:
 
  f' = f <$> (a >>= g) <*> (b >>= h) <*> (c >>= i >>= k)
  g' = g <$> (d >>= g) <*> (e >>= h) <*> (f >>= i >>= k)
   021346789021346789021346789021346789021346789021346789
 
 The new row exposes a sort of tabular structure.

The trouble is that this is not real code.
As it stands, nobody is complaining about this example.
It fits well into 65 columns, being 60 columns wide.

I really would like to see real examples.

I found the lines of the SegmentTree code that was mentioned
recently to be too long for comfort.  Line wrapping in TextEdit
(or for that matter Emacs) takes no heed whatever of Haskell
syntax.  I also found that the lines could very easily have
been broken in quite natural places.

 ...define f' and g'.), as in:
 
  f' = comb f a b c
  g' = comb g d e f
 
 This cannot be made any narrower (up to naming).

Yes, it can.

f' = comb
       f a
       b c
g' = comb
       g d
       e f

There's an indentation meta-rule that I am very fond of:

Where a line is *broken* depends on the spelling of names;
where a line is *indented* depends only on logical structure.

 We  can call a normal form n-wide if its combinator requires n arguments.

And they can each go on a separate line if need be.
 
 This is fair enough, but there are some types of extremely uninteresting code 
 that don't go on slides and are naturally expressed as extremely wide tables. 
  Typically, it is data for use by the program -- the stuff the program 
 traverses to output its answer.

I have nothing to say about machine-generated code (and if it's
_that_ uninteresting, and _that_ naturally expressed by extremely
wide tables, then let's hope no human had to suffer to create it).


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Optimal line length for haskell

2012-10-29 Thread Richard O'Keefe

On 30/10/2012, at 3:28 AM, Alexander Solla wrote:
 On Mon, Oct 29, 2012 at 6:52 AM, Michael Orlitzky mich...@orlitzky.com 
 wrote:
 In any language, a line longer than 80 characters usually (but not
 always) suggests that you might want to stop and rethink your design. In
 many cases a refactoring or two will greatly simplify the code and
 reduce your line length as a result.
 
 I disagree.  That might be true for imperative languages, where width is 
 indicative of deep nesting and its associated problems.  But it is not true 
 for a functional language, where it is merely indicative of a wide normal 
 form.  Yes, the normal form can sometimes be refactored, but to what end?

Better code?  I have no idea of what a wide normal form might be, and less
idea of why that would imply that a narrower and better form does not also
exist.

We can argue till everyone is sick and tired and still not reach any
kind of consensus.  Let's have some *examples*.

(For the record, the longest line lengths I've ever seen have been in
C++ and Java.  I know someone who, and I am not kidding, thinks
a 390-column line in code intended to be read by other people is OK.)

My own perspective is that if I can't fit it onto one slide in large
type for my students to see it is too big.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] forkProcess, forkIO, and multithreaded runtime

2012-10-16 Thread Richard O'Keefe
The problems with forkProcess really are not Haskell's fault.
You will find warnings in the documentation for C's fork():

There are limits to what you can do in the child process.
To be totally safe you should restrict yourself to only
executing async-signal safe operations until such time
as one of the exec functions is called.  All APIs,
including global data symbols, in any framework or
library should be assumed to be unsafe after a fork()
unless explicitly documented to be safe or async-signal safe.

That's actually pretty scary.  I'd always assumed that this was
one of the reasons why the posix_spawn() function and its support
crew were devised.  Which reminds me that I expected to find
posix_spawn() in System.Posix.Process but didn't.

http://www.haskell.org/ghc/docs/7.4-latest/html/libraries/unix-2.5.1.1/System-Posix-Process.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to take a minimum sub list that only contain certain number of elements of certain type?

2012-09-25 Thread Richard O'Keefe
 2012/9/25 Ivan Lazar Miljenovic ivan.miljeno...@gmail.com
 On 25 September 2012 16:51, Magicloud Magiclouds
 magicloud.magiclo...@gmail.com wrote:
  Hi,
For example, I have an array [0..]. Now I want to take a sub list
  that starts from index 0, and only contain 4 odds, and is minimum
  result. The answer should be [0, 1, 2, 3, 4, 5, 6, 7].

What do you actually *mean*?
When you say you have an array, but actually display a *list*,
do you really mean you have something fitting into Data.Array,
or do you really mean you have a list?

When you say "sub list" do you mean a *slice* (a contiguous
chunk) or a *subsequence* (elements preserve their order but
there may be gaps)?  Or, looking at your example, do you mean
a *prefix* (n `take` xs for some n)?

When you say "odds" I presume you mean "odd integer", although
the even/odd concept applies to Gaussian integers too (m+ni
is even if it is divisible by 1+i, which turns out to be
equivalent to m+ni being even (odd) iff m and n have the
same (different) parity).

When you say "is minimum result", what does that mean?  Does
it mean the sum of the elements is minimal?  (If you consider
the list [1,3,5,7,-2,-4,-6,-8,-10,...] it is clear that a
minimum result in that sense could be infinitely long.)

Let's take just one interpretation:
 - the array is a list
 - whose elements are Integers
 - the result must be a prefix of the input
 - which contains four odd Integers
 - and is otherwise as short as possible.

We'll generalise `take`: it will have an integer n,
a predicate p, and a list xs.

ptake :: Int -> (a -> Bool) -> [a] -> [a]

ptake n p (x:xs) | n > 0 = x : ptake (if p x then n-1 else n) p xs
ptake _ _ _              = []

answer :: [Integer] -> [Integer]

answer xs = ptake 4 odd xs

Now this is pretty trivial (it's _exactly_ `take` except for
only counting elements where p is true), so that interpretation
of the problem cannot be the right one.
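Under that interpretation the sketch above does reproduce the
original poster's expected answer; a runnable check (the module
wrapper is mine):

```haskell
-- The generalised take from the message above, made self-contained:
-- keep every element, but only count those satisfying the predicate.
ptake :: Int -> (a -> Bool) -> [a] -> [a]
ptake n p (x:xs) | n > 0 = x : ptake (if p x then n - 1 else n) p xs
ptake _ _ _              = []

main :: IO ()
main = print (ptake 4 odd [0 ..])   -- prints [0,1,2,3,4,5,6,7]
```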


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to take a minimum sub list that only contain certain number of elements of certain type?

2012-09-25 Thread Richard O'Keefe

On 26/09/2012, at 5:56 AM, Gwern Branwen wrote:

 On Tue, Sep 25, 2012 at 1:42 PM, Rishabh Jain rishab...@live.com wrote:
 f x 0 = []
 f (x:xs) y | x `mod` 2 == 0 = x : (f xs y) | otherwise = x : (f xs (y-1))
 
 f [0..] 4
 [0,1,2,3,4,5,6,7]
 
 Tsk, tsk. So ugly. How's this:
 
 let f x = take x . filter odd

Wrong.  The original poster gave an explicit example
in which even elements were *retained* in the output,
they just weren't *counted*.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How to take a minimum sub list that only contain certain number of elements of certain type?

2012-09-25 Thread Richard O'Keefe

On 26/09/2012, at 12:28 PM, Gwern Branwen wrote:

 On Tue, Sep 25, 2012 at 8:17 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 Wrong.  The original poster gave an explicit example
 in which even elements were *retained* in the output,
 they just weren't *counted*.
 
 You are at least the fourth person to email me now to point this out.
 I'm glad I could make four people's day better with the joy of
 correction...
 
 But I never said it was a full solution - please note that I did
 include the output of the function!

The tsk tsk is probably what made people so keen to reply.

 One could consider it a partial solution, however: that gives you the
 _nth_ odd, so if you want a list of numbers up to the _nth_ odd, that
 gives you a stop condition - 'takeWhile =/ nth+1' etc.

That doesn't work either.  Consider the list [1,1,1,1,1].
The element just after the 5th odd number in the list is 1;
takeWhile (/= 1) will thus return [] instead of [1,1,1,1].

 A 2N traverse
 (which laziness might fuse to just 1 traverse, dunno).

Not in this case, for fairly obvious reasons.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] Fmark markup language

2012-09-18 Thread Richard O'Keefe

On 19/09/2012, at 12:04 AM, José Lopes wrote:

 Hello Richard,
 
 When you say "(for) some people (...) your special syntax is not natural"
 that's a good thing. I want these people involved in the project. I want
 to understand what they find natural in order to weigh the options and
 make a proper decision.

One important question is how many *scripts* you want to support.
Do you, for example, want to support Greek?  There is a Unicode
character for the Greek question mark, U+037E, but some documentation I've
seen says that the preferred character is U+003F.  So does ;
terminate a sentence or not?
How many *languages* do you want to support?
Are you going to support Armenian, where the question mark that
ends a sentence goes not _after the last letter_ of the last
word but _over the last vowel_?

(As someone who only writes in English, I have no trouble with the
answer only the Latin script, only the English language.)

I don't actually believe that there _is_ a natural convention for
italics, boldface, superscripts, etc.  Even _this_ is an artificial
convention going back to mechanical typewriters that could underline
but not change font.
 
 On the README file in the github page you will find a brief explanation
 of the markup elements. I need to elaborate you that though because
 I feel I explain it too fast.
 https://github.com/jabolopes/fmark

I did read the README.md.  I disagree that there exists any problem
of manual removal of special characters.  If I want to remove XML markup,
I just do unxml <foobar.xml >foobar.txt.  If I want to convert from one
kind of markup to another, there is pandoc, which is admittedly a touch
buggy, ...

The problem is that the README.md does not in fact explain what any
of the 'markup elements' are.

Let's take one little example, a letter of the kind I used to write
by hand.

123 Copper Road,
   Gemstone,
  Dunedin,
 New Zealand .
31 February 2016.

Dear Schnorer,
   I am writing to you because I am tired of frobulating.

Yours without wax,
   Dr Strabismus of Utrecht
  (whom God preserve!)

How do I tell Fmark about this structure?
How do I get this layout?  (The '1' and 'Y' should be in the centre.)

 Let me answer your questions about sections. The amount of indentation
 is not fixed. You can use whatever you want. There is also no nesting limit
 in Fmark, however, there is a nesting limit in the Latex backend. For now,
 quotations, block quotes, source code embedding, etc, are not implemented.
 Those will be added in the future.

These answers belong in the README.md.
 
 About embedding a Fmark document in another document. That seems to
 be a very cool feature. I will definitely think about it! Maybe you can come
 up with a natural way of doing it?

The last time I thought I had a natural way to do anything like that
I found that the SGML design was worse broken than I would have
believed possible.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] Fmark markup language

2012-09-18 Thread Richard O'Keefe

On 19/09/2012, at 1:43 AM, Stefan Monnier wrote:

 The problem with that is that some people DO end some headings with
 a full stop; for them your special syntax is not natural.
 
 Markdown/ReST is already using the no syntax idea (e.g. compared to
 pre-wiki markup such a LaTeX or Texinfo), so he's simply trying to push
 this idea further.

Markdown is very heavy on syntax,
what it is *light* on is specification of what the
syntax actually is.  As a result,
I'm aware of three different dialects,
and someone told me about having to reverse
engineer the syntax from a Perl implementation.
As a further result, I cannot write a program to
reliably *generate* Markdown.
 
 I suspect it'll be difficult.

Oh, more power to him for trying.
I just don't think it can be pushed very far.

Oh, there is a really *filthy* hack that could be pulled
for italics, bold face, and so on.  Contrary to its original
principles, Unicode includes several copies of ASCII
(see http://unicode.org/charts/PDF/U1D400.pdf):
Mathematical bold,
Mathematical italic,
Mathematical bold italic,
Mathematical script,
Mathematical bold script,
Mathematical fraktur,
Mathematical double struck (blackboard-bold),
Mathematical bold fraktur,
Mathematical sans-serif,
Mathematical sans-serif bold,
Mathematical sans-serif italic,
Mathematical sans-serif bold italic,
Mathematical monospace,
and some similar sets of Greek.
So as long as you don't want strike-through or underlining,
and as long as you don't want italic Cyrillic c, ...
Too bad if you want a bold italic capital Thorn...

 
 What if I want to use indentation to express quotation instead?
 
 I think this one is solvable: a paragraph that's more indented than the
 previous heading can be considered a quote.

Ah, but the quotation might not end with a sentence terminator,
so that would be considered a new heading.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] Fmark markup language

2012-09-17 Thread Richard O'Keefe

On 18/09/2012, at 3:09 PM, José Lopes wrote:
 Fmark (Friendly Markup) is a very simple markup language without
 syntax and simple but sophisticated document styling, capable of
 producing PDF and XML files.

Do you _really_ mean without syntax?

Nope, thought not:

Fmark relies merely on spacing, indentation,
and common punctuation (e.g., periods, exclamation
and interrogation marks) to reconstruct the structure
of your document.

That's SYNTAX.

A markup language without syntax is like a meal without food.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] Fmark markup language

2012-09-17 Thread Richard O'Keefe

On 18/09/2012, at 3:57 PM, José Lopes wrote:
 The problem with Fmark is also its greatest feature. While other markup 
 languages
 introduce special syntactic characters to give meaning to the document's 
 elements,
 I would like to take a different approach: I want to use characters that 
 people already
 use in document writing to achieve the same result. For example, in Mediawiki 
 a
 heading is some text surrounded by equal signs. But in Fmark a heading is 
 simply some
 text that does not end in a punctuation character, such as period or an 
 exclamation mark.
 I argue that this is a more natural approach.

The problem with that is that some people DO end some headings with
a full stop; for them your special syntax is not natural.
 
 I want to find a natural way of not burdening the user with the task of 
 having to learn
 some special syntax in order to write a document.

You haven't found it.  What you *have* is very special syntax expressed using
several methods, AND IT IS NOT DOCUMENTED.  I have read the examples, and I can
find nothing explaining what the syntax is.

For example, I find indenting subsections rather unnatural and error-prone.
(For example, moving a paragraph from a deep location to a shallow one would
create a new subsection unintentionally.)
Is the amount of indentation fixed?  How many levels of subsections are
supported?  What if I want to use indentation to express quotation instead?
How do I embed source code?  How can you get an example of Fmark in an
Fmark document without having it acted on?  I could go on and on with
questions about syntax.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is the good way to work with list comprehension and UTCTime?

2012-09-16 Thread Richard O'Keefe

On 15/09/2012, at 5:14 AM, Chris Heller wrote:

 You might want to have a look at the time-recurrence package: 
 http://hackage.haskell.org/package/time-recurrence
 
 For your simple cases you would do something like:
 
 Each second:
 
 starting (UTCTime ...) $ recur secondly
 
 Each minute:
 
 starting (UTCTime ...) $ recur minutely

Ouch.  Look up "minutely" (my-NEWT-ly) in an English
dictionary.  Look up "secondly" while you're there.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Build regressions due to GHC 7.6

2012-08-30 Thread Richard O'Keefe

On 30/08/2012, at 5:26 PM, Bryan O'Sullivan wrote:
 The reasons for these problems fall into three bins:
   • Prelude no longer exports catch, so a lot of import Prelude hiding 
 (catch) had to change.

This could have been avoided if

    import <module> hiding (<importables>)

were interpreted simply as requiring that the specified importables
not be imported *whether they could have been or not*, rather than as
requiring that they exist to be sneered at.  It seems rather odd that
the removal of something that a module insists it doesn't want should
break that module.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Sliding Window functional data structure

2012-08-30 Thread Richard O'Keefe
Consider the following interface

type Ord k => Sliding_Window k v

entries :: Sliding_Window k v -> [(k,v)]
    The cost is expected to be linear in the length of
    the result.  The pairs are listed in increasing
    order of k.

add :: Ord k => k -> v -> Sliding_Window k v -> Sliding_Window k v
    precondition: all (< k) [k' | (k',_) <- entries q]
    The cost should be at most O((log . length . entries) q).
    post: entries (add k v q) = entries q ++ [(k,v)]

since :: Ord k => k -> Sliding_Window k v -> [(k,v)]
    answers [(k',v) | (k',v) <- entries q, k' > k]
    The cost should be at most O((log . length . entries) q
                                 + length result)

purge :: Ord k => k -> Sliding_Window k v -> Sliding_Window k v
    answers q' such that entries q' = [(k',v) | (k',v) <- entries q,
                                                k' > k]
    The cost should be at most O((log . length . entries) q
                                 + length [k' | (k',v) <- entries q,
                                                k' <= k])

Ignoring costs, this can obviously be done trivially using
a list of pairs.  Paying some attention to costs, it can also
be done using some sort of balanced search tree.  The data
structure is close to a priority queue, but subtly different.

I believe I can see how to do this in an imperative language
using a Dijkstra-style array of pairs:  add is amortised O(1)
using the well known doubling strategy, thanks to the strictly
increasing key requirement; since is a binary search followed
by a segment copy; purge is a binary search followed by nilling
out the now unwanted elements.  By adapting the back-to-back
pair of lists implementation of queues, we can obviously do
add in O(1) and purge in O(1+#deleted items) time in a pure
functional language.

Thing is, there's a baffling array of data structures to examine
(AVL, RB, 2-3, 2-3-4, 1-2-brother, finger ... trees) and I wondered
if anyone had any idea what would be particularly good for this
rather asymmetric problem.
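For what it's worth, the balanced-search-tree option can be sketched
directly with Data.Map (my own sketch against the interface above, not
a claim about the best structure; M.split does the binary search for
both since and purge):

```haskell
import qualified Data.Map as M

-- An illustrative instantiation of the interface via a balanced tree.
-- M.split k divides a map into entries with keys < k and keys > k,
-- in O(log n), which is exactly what since and purge need.
type SlidingWindow k v = M.Map k v

entries :: SlidingWindow k v -> [(k, v)]
entries = M.toAscList                        -- linear in the result

-- precondition (as in the interface): k exceeds every key present
add :: Ord k => k -> v -> SlidingWindow k v -> SlidingWindow k v
add = M.insert                               -- O(log n)

since :: Ord k => k -> SlidingWindow k v -> [(k, v)]
since k q = M.toAscList (snd (M.split k q))  -- O(log n + length result)

purge :: Ord k => k -> SlidingWindow k v -> SlidingWindow k v
purge k q = snd (M.split k q)                -- O(log n)

main :: IO ()
main = do
  let q = add 3 "c" (add 2 "b" (add 1 "a" M.empty)) :: SlidingWindow Int String
  print (since 1 q)            -- prints [(2,"b"),(3,"c")]
  print (entries (purge 2 q))  -- prints [(3,"c")]
```

This does not give the amortised O(1) add of the array version, but it
meets the stated O(log n) bounds throughout.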



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] createProcess running non-existent programs

2012-08-13 Thread Richard O'Keefe

On 13/08/2012, at 11:26 PM, Alexander Kjeldaas wrote:

 
 This isn't that hard - a pipe shouldn't be needed anymore.  Just require a 
 post-2003 glibc.
 
 fexecve is a system call in most BSDs.  It is also implemented in glibc using 
 a /proc hack.

fexecve is now in the Single Unix Specification, based on
POSIX as of 2008, I believe.  However,
http://www.gnu.org/software/gnulib/manual/html_node/fexecve.html
says
Portability problems not fixed by Gnulib:
  *  This function is missing on many non-glibc platforms: MacOS X 10.5,
     FreeBSD 6.0, NetBSD 5.0, OpenBSD 3.8, Minix 3.1.8, AIX 5.1, HP-UX 11,
     IRIX 6.5, OSF/1 5.1, Solaris 11 2010-11, Cygwin 1.5.x, mingw, MSVC 9,
     Interix 3.5, BeOS.

That warning doesn't seem to be fully up to date.  I'm using MacOS X 10.6.8
and fexecve() isn't in the manuals or in unistd.h.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] 3 level hierarchy of Haskell objects

2012-08-08 Thread Richard O'Keefe

On 9/08/2012, at 11:11 AM, wren ng thornton wrote:
 
 Notably, a type class instantiated with all its arguments is not itself a 
 type!

All the comparisons of Haskell typeclasses with Java classes answered
in one brief lucid sentence.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What Haskell Records Need

2012-08-02 Thread Richard O'Keefe

On 2/08/2012, at 5:34 PM, Jonathan Geddes wrote:
 Ouch! And that's not even very deeply nested.
 Imagine 4 or 5 levels deep. It really makes
 Haskell feel clunky next to `a.b.c.d = val`
 that you see in other languages.

I was taught that this kind of thing violates the Law of Demeter
and that an object should not be mutating the parts of an
acquaintance's parts, but should ask the acquaintance to do so.
I'd say that a.b.c.d = val is at the very least a sign that
some encapsulation did not happen.

Semantic editor combinators are ingenious, but they make
me feel somewhat uneasy, in that they really are in some
sense about the *syntax* (or maybe the concrete representation)
of things rather than their *semantics* (or maybe I mean the
abstract value).


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Capturing the parent element as I parse XML using parsec

2012-07-29 Thread Richard O'Keefe

On 29/07/2012, at 6:21 PM, C K Kashyap wrote:
 I am struggling with an idea though - How can I capture the parent element of 
 each element as I parse? Is it possible or would I have to do a second pass 
 to do the fixup?

Why do you *want* the parent element of each element?
One of the insanely horrible aspects of the Document Object Model is that every
element is nailed in place by pointers everywhere, with the result that you
cannot share elements, and even moving an element was painful.
I still do a fair bit of SGML/XML processing in C using a Document Value Model
library that uses hash consing, and it's so much easier it isn't funny.

While you are traversing a document tree it is useful to keep track of the
path from the root.  Given

data XML
   = Element String [(String,String)] [XML]
   | Text String

you do something like

traverse :: ([XML] -> [a] -> a) -> ([XML] -> String -> a) -> XML -> a
traverse f g xml = loop [] xml
  where loop ancs (Text s)           = g ancs s
        loop ancs e@(Element _ _ ks) = f ancs' (map (loop ancs') ks)
          where ancs' = e:ancs

(This is yet another area where Haskell's non-strictness pays off.)
If you do that, then you have the parent information available without
it being stored in the tree.
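A quick usage sketch (the helper names and the toy document are mine,
not from the original message; the combinator is renamed traverse' to
avoid the Prelude's traverse in modern GHCs):

```haskell
data XML
    = Element String [(String, String)] [XML]
    | Text String

-- The ancestor-tracking fold from the message above.
traverse' :: ([XML] -> [a] -> a) -> ([XML] -> String -> a) -> XML -> a
traverse' f g xml = loop [] xml
  where loop ancs (Text s)           = g ancs s
        loop ancs e@(Element _ _ ks) = f ancs' (map (loop ancs') ks)
          where ancs' = e : ancs

-- Pair every text node with the element names on its path from the root.
textsWithPath :: XML -> [([String], String)]
textsWithPath =
    traverse' (\_ kids -> concat kids)
              (\ancs s -> [(reverse [n | Element n _ _ <- ancs], s)])

main :: IO ()
main = print (textsWithPath
                (Element "doc" [] [Text "hi", Element "b" [] [Text "yo"]]))
-- prints [(["doc"],"hi"),(["doc","b"],"yo")]
```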





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] COBOL-85 parser, anyone?

2012-07-24 Thread Richard O'Keefe

On 25/07/2012, at 1:47 AM, S D Swierstra wrote:

 You can find a CFG at: http://www.cs.vu.nl/~x/grammars/vs-cobol-ii/index.html

Thank you.

I see it's closer to COBOL-85 than I realised.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] COBOL-85 parser, anyone?

2012-07-20 Thread Richard O'Keefe
Does anyone have a parser for COBOL-85 written in Haskell,
or written using some freely available tool that communicates
easily with Haskell?

I don't need it _yet_, but I'm talking with someone who is
trying to get access to a real legacy site with a bunch of,
well, basically COBOL 85, but there are extensions...
We're not talking about transformation at this stage, just
analysis.

I could probably hack up the extensions given a place to start.

I've found some papers and more dead links than I care for
and lots of mention of ASF+SDF which is apparently superseded
by Rascal.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] stripSuffix

2012-07-17 Thread Richard O'Keefe

On 18/07/2012, at 12:37 PM, Brandon Allbery wrote:

 On Tue, Jul 17, 2012 at 8:33 PM, Alvaro Gutierrez radi...@google.com wrote:
 Pardon me if this has been answered before: how come there's a
 stripPrefix in Data.List, but no matching stripSuffix?
 
 Probably because prefixes are easier to do, given the nature of singly linked 
 lists. 

Here are two other possible reasons.

It's not just easier, stripPrefix pfx lst is *possible* as long
as pfx is finite, even when lst is infinite.  The same would not
be true of a suffix stripper.

It's so easy to write

stripSuffix :: Eq a => [a] -> [a] -> Maybe [a]
stripSuffix sfx lst =
    case stripPrefix (reverse sfx) (reverse lst) of
      Nothing -> Nothing
      Just ys -> Just (reverse ys)

I can think of two cases where I'd want something like this.
One is manipulating file extensions, where I'd want to use
System.FilePath.splitExtension or something like that anyway.
The other is suffix stripping for text processing, where I'd
want to use a trie to match a whole lot of possible suffixes.
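A runnable check of the reversal trick (the test values are mine; the
import makes the fragment self-contained):

```haskell
import Data.List (stripPrefix)

-- The double-reverse version from the message, with a signature added.
stripSuffix :: Eq a => [a] -> [a] -> Maybe [a]
stripSuffix sfx lst =
    case stripPrefix (reverse sfx) (reverse lst) of
      Nothing -> Nothing
      Just ys -> Just (reverse ys)

main :: IO ()
main = do
  print (stripSuffix ".txt" "notes.txt")   -- prints Just "notes"
  print (stripSuffix ".txt" "notes.png")   -- prints Nothing
```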


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell-Cafe Digest, Vol 106, Issue 38

2012-06-26 Thread Richard O'Keefe

On 27/06/2012, at 12:51 PM, John Lato wrote:
 
 data Tree a = Leaf a | Branch (Tree a) ( Tree a)
  deriving (Foldable, Show)

While I am familiar with deriving (Show),
I am not familiar with deriving (Foldable),
which looks rather useful.

http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/deriving.html
just says "With -XDeriveFoldable, you can derive instances of the
class Foldable, defined in Data.Foldable." but it provides no details.

Would you care to explain more about deriving (Foldable)?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell-Cafe Digest, Vol 106, Issue 38

2012-06-26 Thread Richard O'Keefe

On 27/06/2012, at 3:18 PM, John Lato wrote:

 On Wed, Jun 27, 2012 at 9:15 AM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 
 On 27/06/2012, at 12:51 PM, John Lato wrote:
 
 data Tree a = Leaf a | Branch (Tree a) ( Tree a)
  deriving (Foldable, Show)
 
 While I am familiar with deriving (Show),
 I am not familiar with deriving (Foldable),
 which looks rather useful.
 
 http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/deriving.html
 just says With -XDeriveFoldable, you can derive instances of the
 class Foldable, defined in Data.Foldable. but it provides no details.
 
 Would you care to explain more about deriving (Foldable)?
 
 There's not much to explain, DeriveFoldable basically does just that;
 automatically provide an instance of the Foldable class for a data
 type.

That was sufficiently obvious, yes.
The question remains, ***WHAT*** instance?

  I think the original proposal for DeriveFoldable was from Twan
 van Laarhoven, 
 http://www.mail-archive.com/haskell-prime@haskell.org/msg02116.html,

which goes into a great deal of detail about what deriving (Functor)
does, but none whatsoever about what deriving (Foldable) does.  Even
for Functor, another 3 or 4 nontrivial examples would be nice.

 and there's a little bit of history on GHC's trac,
 http://hackage.haskell.org/trac/ghc/ticket/2953.  The current
 implementation probably hasn't changed much since Simon PJ's original
 patch, although there's probably substantial overlap with ghc's
 generics these days.

That trac entry contains one sentence that seems to still apply:

  What is missing is a section in the user manual describing the changes.

It refers to section 8.5, which is now 7.5, and there is still no
adequate documentation there.
 
 As for the Foldable class itself, the docs at
 http://www.haskell.org/ghc/docs/latest/html/libraries/base/Data-Foldable.html
 are pretty good. 

Yes, Foldable *is* documented.
However, that page says nothing whatever about deriving (Foldable).
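For the record, here is what the derived instance amounts to for the Tree
type in this thread; the hand-written instance in the comment is my own
rendering of what GHC generates, not taken from the documentation:

```haskell
{-# LANGUAGE DeriveFoldable #-}
import Data.Foldable (Foldable, toList)

data Tree a = Leaf a | Branch (Tree a) (Tree a)
  deriving (Show, Foldable)

-- The derived instance is, near enough, what one would write by hand:
--
--   instance Foldable Tree where
--     foldMap f (Leaf a)     = f a
--     foldMap f (Branch l r) = foldMap f l `mappend` foldMap f r

main :: IO ()
main = do
  let t = Branch (Branch (Leaf 1) (Leaf 2)) (Leaf 3) :: Tree Int
  print (toList t)        -- [1,2,3]
  print (sum (toList t))  -- 6
```

That is, the elements are visited left to right, and every Foldable
consumer (toList, sum, and the rest of Data.Foldable) comes for free.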






Re: [Haskell-cafe] Why does Enum succ and pred functions throw exception

2012-06-21 Thread Richard O'Keefe

On 21/06/2012, at 9:11 PM, Rouan van Dalen wrote:
 I was hoping to have some functions like:
 
   safeSucc :: (Enum a) => a -> Maybe a
 

Types that are instances of Enum don't necessarily have bounds.
It always struck me as odd that Enum doesn't extend Ord.
There's a reason given for not having Bounded extend Ord,
which doesn't really apply to a class having fromEnum :: a -> Int.
Testing whether an Enum value is at a limit is thus a bit tricky.

isMaxBound, isMinBound :: (Enum t, Bounded t) => t -> Bool

isMaxBound x = fromEnum x == fromEnum (maxBound `asTypeOf` x)

isMinBound x = fromEnum x == fromEnum (minBound `asTypeOf` x)

safeSucc, safePred :: (Enum t, Bounded t) => t -> Maybe t

safeSucc x = if isMaxBound x then Nothing else Just (succ x)

safePred x = if isMinBound x then Nothing else Just (pred x)





Re: [Haskell-cafe] Tests by properties: origin?

2012-06-05 Thread Richard O'Keefe
As far as I'm aware:
 - property-based testing wasn't new (think 'assertions' and then
   think 'branch coverage')
 - randomly generated test cases weren't new (look up 'fuzz testing')
   and there were tools like DGL to generate random test cases in a
   controlled sort of way
 + the *type-driven* approach making it nearly effortless to test
   a property once stated was new.

As soon as I met QuickCheck, I knew what it was for and how to use it.
The truly astonishing thing was how _easy_ it was to get started.  It
is true that other languages have since picked up the idea (like
Erlang), but without Haskell's type system to drive it, it's not nearly
so easy to get started.  The Haskell implementation of QuickCheck was a
couple of pages of code.  The first Erlang implementation is a serious
proprietary product.
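The shape of the idea fits in a few lines of plain Haskell; the sketch below
checks a property exhaustively over small inputs, where QuickCheck would
instead derive a random generator from the type (the names here are mine,
not QuickCheck's API):

```haskell
import Control.Monad (replicateM)

-- A property is just a Boolean function over the input domain.
prop_reverse :: [Int] -> Bool
prop_reverse xs = reverse (reverse xs) == xs

-- Exhaustive generator: every list over {0,1,2} of length at most 5.
-- QuickCheck replaces this with random, type-driven generation.
smallLists :: [[Int]]
smallLists = concatMap (\n -> replicateM n [0, 1, 2]) [0 .. 5]

main :: IO ()
main = putStrLn $
  if all prop_reverse smallLists
    then "+++ OK, passed " ++ show (length smallLists) ++ " cases."
    else "*** Failed."
```

The effortlessness mentioned above is precisely that the generator half of
this sketch disappears once the type system supplies it.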





Re: [Haskell-cafe] Requesting Feedback: I Love Haskell, but can't find a place to use it

2012-05-31 Thread Richard O'Keefe
It's difficult to imagine any kind of program that doesn't need
testing; surely there is a role for Haskell in writing test data
generators?





Re: [Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-29 Thread Richard O'Keefe

On 30/05/2012, at 10:16 AM, Eric Rasmussen wrote:
 One idea (contrived and silly though it is) is modeling a Courier that 
 delivers message to Persons. There is a standard default reply for all 
 Persons, some individuals have their own default reply, and there are 
 conditional replies based on the sender. Each reply has the ability to alter 
 a Person's mood. The goal of the exercise would be to read in a CSV file in 
 the form of To, From, Message, and then output the interactions based on 
 rules.

Simulation is of course what Simula 67, the first well-known object-oriented
language, was devised for.  Simula 67 was also one of the first three
languages I know of to include concurrency as a standard feature, the others
being PL/I and Algol 68.  And to an Erlang programmer, the obvious way to
model a simulation is with one process per entity.  It's also not accidental
that Smalltalk has had concurrency since it first appeared, and the
simulation example in the Blue Book (and the simulation library derived
from it) makes essential use of concurrency.

In ML I would use the Concurrent ML facilities (supported in SML/NJ and MLton).
Using Haskell I would definitely consider simulation using concurrency.

And objects per se might not be all that useful.  In F# I would expect to use
threads but not classes.
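That style is cheap to sketch in Haskell with the concurrency primitives in
base; the entities and messages below are invented for illustration, not
taken from the exercise:

```haskell
import Control.Concurrent (forkIO, newChan, readChan, writeChan)
import Control.Monad (forM_, replicateM)
import Data.List (sort)

-- One lightweight thread per simulated entity, each posting to a
-- shared mailbox: the Erlang-style "one process per entity" shape.
main :: IO ()
main = do
  mailbox <- newChan
  let entities = ["courier", "mr-h", "mrs-h"]
  forM_ entities $ \e ->
    forkIO (writeChan mailbox (e ++ ": delivered"))
  msgs <- replicateM (length entities) (readChan mailbox)
  mapM_ putStrLn (sort msgs)  -- sorted only to make the output deterministic
```

A real simulation would keep each thread alive in a receive loop with its own
state, but the skeleton is no bigger than this.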






Re: [Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-27 Thread Richard O'Keefe

On 26/05/2012, at 4:16 AM, David Turner wrote:
 
 I don't. I think the trouble is that classes don't add value in exercises of 
 this size.

This was the key point, I think.
In this example, there wasn't any significant behaviour that could be moved
to superclasses.  For that matter, whether a supplier is plain, preferred,
or problematic is, one hopes, not a *permanent* property of a supplier.

Sometimes higher-order functions can substitute for classes.
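A made-up illustration of that last point: where an OO design would introduce
PlainSupplier and PreferredSupplier subclasses overriding one method, a record
holding a function does the same job, and since it is only data the status is
easy to change later (all names below are mine, not from the exercise):

```haskell
-- The one "method slot" carried as a plain function; no class hierarchy.
data Supplier = Supplier
  { supplierName :: String
  , reply        :: String -> String  -- the behaviour a subclass would override
  }

plain, preferred :: String -> Supplier
plain n     = Supplier n (\msg -> "Dear " ++ n ++ ", " ++ msg)
preferred n = Supplier n (\msg -> "Dearest " ++ n ++ ", " ++ msg ++ " Thank you!")

-- A supplier's status need not be permanent: swap the function in place.
promote :: Supplier -> Supplier
promote s = s { reply = reply (preferred (supplierName s)) }

main :: IO ()
main = do
  let acme = plain "Acme"
  putStrLn (reply acme "your order shipped.")
  putStrLn (reply (promote acme) "your order shipped.")
```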




Re: [Haskell-cafe] A functional programming solution for Mr and Mrs Hollingberry

2012-05-23 Thread Richard O'Keefe

On 21/05/2012, at 5:33 AM, Andreas Pauley wrote:
 With this in mind I've created a programming exercise where I imagine
 an OO programmer would use an object hierarchy with subtype
 polymorphism as part of the solution.

Being unfamiliar with git, I've submitted an AWK answer by e-mail.

I've used quite a few OO languages.  I like to think that I *am*
an OO programmer.  But this exercise struck me from the beginning
as something where classes would add nothing but bulk.  As a fan
of Smalltalk, I have to say that the Smalltalk version confirmed
this for me; a Smalltalk solution for this exercise could be a lot
smaller than that one if it _didn't_ introduce new classes.





Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-23 Thread Richard O'Keefe

On 24/05/2012, at 4:39 AM, Isaac Gouy wrote:

 From: Richard O'Keefe o...@cs.otago.ac.nz
 Sent: Tuesday, May 22, 2012 7:59 PM
 
 But string processing and text I/O using the java.io.* classes aren't 
 brilliant.
 
 Wait just a moment - Are you comparing text I/O for C programs that process 
 bytes against Java programs that process double-byte unicode?

No.  Amongst other things, I have my own ByteString and ByteStringBuilder
classes that are basically clones of String and StringBuilder, and using
them makes surprisingly little direct difference; the point is saving
memory.

I have obtained large speedups in Java using Java by dodging around the
Java libraries.  Other people have reported the same to me.

 With both of these changes, we are moving away from recommended good 
 practice;
 the faster code is the kind of code people are not supposed to write any 
 more.
 
 Says who? Is that on your own authority or some other source you can point us 
 to?

It looks increasingly as though there is no point in this discussion.
Is there ANY conceivable criticism of Java that will not elicit
ad hominem attacks from you?

I have read more Java textbooks than I wished to.  I was on Sun's
Java techniques and tips mailing list for years.  I could go on,
but is there, *really*, any point?
 
 These particular measurements were made using my own Smalltalk compiler
 which is an oddity amongst Smalltalks: a whole program compiler that compiles
 via C.  Yes, most of the good ideas came from INRIA, although ST/X does
 something not entirely dissimilar.
 
 Wait just a moment - you wrote I didn't _think_ I'd omitted anything 
 important and now it turns out that the measurements were made using your 
 personal Smalltalk implementation!
 
 You have got to be joking.

Why?  On various benchmarks, sometimes VisualWorks is better,
sometimes my system is better.  My system is utterly naive,
incorporating almost none of the classic Smalltalk optimisations.

I redid the test using VisualWorks NonCommercial.
It took about twice as long as my Smalltalk did.
According to 'TimeProfiler profile: [...]',
98% of the time is in the load phase; half of that
is down to the hash table.  A surprisingly small part
of the rest is due to actual input (ExternalReadStream>>next);
quite a bit goes into building strings and testing characters.

Why the difference?
With all due respect, VisualWorks still has the classic Smalltalk
implementation of hash tables.  Mine is different.  This is a
library issue, not a language issue.
One of the tasks in reading is skipping separators.
Since it's used a lot in parsing input, my library pushes that
right down to the bottom level of ReadStream and ChannelInputStream.
VisualWorks uses a single generic implementation that doesn't get
up to the low level dodges mine does.  And so on.

All *library* issues, not *compiler* or *language* issues.

Which is the whole point of this thread, as far as I am concerned.
C, Java, Smalltalk: this real example is dominated by *library*
level issues, not language issues or compiler issues.

 And it's not INTERESTING, and it's not about LANGUAGES.
 There is NOTHING about the Java language that makes code like this
 necessarily slow.  It's the LIBRARY.  The java.io library was
 designed for flexibility, not speed.  That's why there is a java.nio
 library.  
 
 Here's the gorilla in the room question - So why doesn't your program use 
 java.nio?
 
Because that would be insane.

This is a program I originally whipped up in less than an hour
for two reasons:

(A) I wanted to provide some students with an example of a
work-list algorithm that had some realism to it.
For that purpose, the program had to be READABLE.

(B) To my astonishment, the tsort(1) programs in OpenSolaris
and Mac OS X 10.6.8 turned out to be grotesquely slow for
non-toy graphs.  I was expecting to have a use for the
program myself, so as it stood, the Java version was
already quite fast enough to be useful.  (As in, a LOT
faster than the system version, even though the system
version was written in C.)

The one issue I had with the first version was not time, but
space, so I explored two ways of making it take less space.

There is no NEED to rewrite the program to use java.nio;
having replaced the system version of the command the Java
version was no longer the bottleneck in my intended use.

For me personally, having no experience with java.nio,
it was *easier* to rewrite the program from scratch in C
than to overcome the java.nio learning curve.  And in any
case, I knew very well that I could get near enough to the
same order of improvement using InputStream and wrapping
my own buffering code over that (I've done that before).
Above all, since the students were even less familiar with
nio than I am, using nio would have destroyed the program's
utility for purpose (A).

As for the Smalltalk version, I often rewrite small things
into Smalltalk in order to find out what I'm doing

Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-22 Thread Richard O'Keefe

On 23/05/2012, at 4:54 AM, Isaac Gouy wrote:
 There may be very little one can do about the I/O part.
 
 Maybe you could say how the Java I/O is being done.

 For 50,000 nodes and 8,385,254 edges,
 Java (first version) ran out of memory after 89.54 seconds (default heap)
 Java (2nd version)  13.31 seconds   -- avoid Integer boxing!
 Java (3rd version)  15.07 seconds
 Smalltalk   14.52 seconds
 C2.36 seconds
 
 fwiw my expectation is that Java would not be that much slower than C, and 
 would be considerably faster than Smalltalk.

My own experience is that Java is anywhere between 2 times slower and 150 times
slower than C.  This is not for number crunching; one _would_ expect Java to be
about the same as C for that because it is working at the same level as C.  But
string processing and text I/O using the java.io.* classes aren't brilliant.

   Reader r = new BufferedReader(new InputStreamReader(System.in));

This "layers of wrappers" approach to text I/O adds overheads of its own, but
less than I'd thought.  Using System.in directly takes the time down from
15.07 seconds to 11.11 seconds.  The code used Character.isWhitespace(c) to
test for a white space character; replacing that by c <= ' ' brought the time
down further to 10.87 seconds.

With both of these changes, we are moving away from recommended good practice;
the faster code is the kind of code people are not supposed to write any more.

Characters are read one at a time using r.read().  There are no fast "skip
characters in this set" or "take characters in this set and return them as
a string" methods in the Reader or InputStream interfaces, and StreamTokenizer
reads characters one at a time using .read() also, so would be slower.

As for Smalltalk, you must be thinking of free Smalltalk systems that lacked a
JIT.  Commercial Smalltalks have excellent JITs (many HotSpot ideas were copied
from them) and there is now a pretty good one for the free systems Squeak and
Pharo.  These particular measurements were made using my own Smalltalk compiler
which is an oddity amongst Smalltalks: a whole program compiler that compiles
via C.  Yes, most of the good ideas came from INRIA, although ST/X does
something not entirely dissimilar.

 fwiw my expectation is that should be possible to make the Java program 
 considerably faster.

I have used profiling to find where the time was going.
Approximately 70% is spent in the one method that reads a word:
 - a loop skipping white space characters
 - a loop reading other characters and adding them to a StringBuilder
 - [looking the string up in a HashMap and creating and entering a
new Node if necessary -- this time is not part of that 70%].

Any reasonable person would expect file reading to be buffered and
for the read-one-byte method to usually just fiddle a few integers
and fetch an element of an array.  In fact the method is native, so
every character requires switching from the Java universe to the C
one and back.  (The Smalltalk equivalent is pretty close to fgetc().)

The only way to make this Java program 'considerably faster' is to
not read characters (or bytes) in the natural Java way.  (The way,
in fact, that java.io.StreamTokenizer does.)

And it's not INTERESTING, and it's not about LANGUAGES.
There is NOTHING about the Java language that makes code like this
necessarily slow.  It's the LIBRARY.  The java.io library was
designed for flexibility, not speed.  That's why there is a java.nio
library.  

Haskell is in pretty much the same boat.  I've always liked the
I/O model in the Haskell report.  I've expected simplicity from it,
not speed, and that's what I get.  But as all the new approaches
to I/O (like iteratees, which make my head hurt) make perfectly
clear, the limitations of the IO module are not limitations of
Haskell, or of JHC, but of a particular library.

And that's the point I was making with this example.  Why does
Smalltalk come out in the middle of the Java results?  A balance
between a language penalty (tagged integer arithmetic is a lot
slower than native integer arithmetic) and a library bonus (a
leaner meaner I/O design where there are wrappers if you want
them but you very seldom need them).  It's the great advantage
of using libraries rather than syntax: libraries can be changed.
 
 All I was saying is that surely the only *point* of
 talking about the performance of *languages* is to get some idea of
 how well programs are likely to do, and not any old specially crafted
 code, but the kind of code that is likely to be written for purposes
 other than benchmarking.  So the more you bash on a particular example
 to get the time down, the less representative it is of the kind of code
 people are likely to write *for purposes other than benchmarking*.
 
 Certainly less representative of the kind of code students

Leave the students out of it.  It's less representative of the kind of
code I see written by Sun 

Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-21 Thread Richard O'Keefe

On 22/05/2012, at 4:15 AM, Isaac Gouy wrote:
 Actually, I/O bound is *good*.
 
 Why would that be good or bad?

The context here is a UNIX-style topological sorting program.
Being I/O bound means that the program is limited by how fast
it can read the data.  If 90% of the time goes into reading
the data, that means that the *algorithmic* part is fast enough.

There may be very little one can do about the I/O part.

 I suppose you're just suggesting that, in this case, the performance 
 characteristics of the Java and Smalltalk programs are similar to the C 
 program; but, for whatever reason, you're leaving us guessing about the 
 timings for those other programs.

I didn't _think_ I'd omitted anything important.  Oh well.

For 25,000 nodes and 2,989,635 edges,
Java (first version) 4.83 seconds   -- uses ArrayList, HashMap
Java (2nd version)   4.17 seconds   -- uses own IntList
Java (3rd version)   4.09 seconds   -- different approach
Smalltalk4.40 seconds
C0.92 seconds   -- my code
Mac OS X tsort(1)  150.35 seconds   -- 11.84 user + 138.51 system

For 50,000 nodes and 8,385,254 edges,
Java (first version) ran out of memory after 89.54 seconds (default heap)
Java (2nd version)  13.31 seconds   -- avoid Integer boxing!
Java (3rd version)  15.07 seconds
Smalltalk   14.52 seconds
C2.36 seconds
Mac OS X tsort(1)  482.32 seconds   -- 41.55 user + 440.77 system

The Mac OS X tsort(1) is presumably written in C.  Symbols have been
stripped, so even with Instruments I can't see where the time is going.
Judging from its memory behaviour, I believe that most of the time is
going in the input phase, which suggests a spectacularly inefficient
symbol table, which suggests that it might be using a not-very-modified
version of the Unix V7 code, which used a linear search...

 Of course, to some degree, user defined hash functions remedy that specific 
 problem.
 
 While creating other, and perhaps worse, ones.
 
 For example, in the Smalltalk code, if you use a Dictionary of Strings,
 you're getting Robert Jenkin's hash function in optimised C.  If you
 supply your own, you're getting a very probably worse hash function
 and it's going to run rather slower.  And above all, the stuff you are
 benchmarking is no longer code that people are actually likely to write.
 
 I think you're being a touch quarrelsome :-)

That upset me.  All I was saying is that surely the only *point* of
talking about the performance of *languages* is to get some idea of
how well programs are likely to do, and not any old specially crafted
code, but the kind of code that is likely to be written for purposes
other than benchmarking.  So the more you bash on a particular example
to get the time down, the less representative it is of the kind of code
people are likely to write *for purposes other than benchmarking*.

Just today I marked a student program where their program took 15 minutes
and my model answer took a touch under 4 milliseconds.  I explained to
them _that_ their program was spectacularly slow.  They already knew _why_
it was.  I also explained the one trick (lazy initialisation) that could
have hugely improved the time.  I also told them that they had made the
right decision, to optimise *their* time, not the computer's, in their
particular context.

 Do *all* Smalltalk language implementations provide that same hash function 
 in optimised C?

*That* function, no.  *Some* hash function as a primitive (meaning
optimised C), well, I don't have every Smalltalk, but the ones I do
have, I've checked, and yes they do.
 
 Have Smalltalk language implementations *always* provided that same hash 
 function in optimised C?

Obviously not: Smalltalk-80 ran on Xerox D-machines, which didn't *have*
C.  Primitives were implemented in microcode.
 
 Have you researched what code people are actually likely to write, or are you 
 just speculating? ;-)

I'm in my 6th decade.  I've seen a lot of code in a lot of languages.
 
 "It depends" is the second best answer we can get.
 The best answer is one that says _what_ it depends on.
 
 That may be the best answer to some other question - but for the stated 
 question I think we're looking for a Yes or a No :-)

The subject line has the obvious and boring answer yes, of course.

I recall one time when I wanted to analyse some data and typed up
Fortran code for a suitable technique from a book.  It didn't work.
After struggling to debug it for a while, I implemented the whole
thing from scratch in Prolog.  Then I went back to the Fortran
code, spotted my typing mistake, and fixed it.  Result, two working
programs.  The Prolog program (compiled to a VM which was emulated;
the compiler was not very clever) ran faster than the Fortran program
(compiled to native code by a seriously optimising compiler from a
major computer manufacturer).

Reason?  

Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-20 Thread Richard O'Keefe

On 19/05/2012, at 5:51 AM, Isaac Gouy wrote:
 In the 'tsort' case, it turns out that the Java and Smalltalk
 versions are I/O bound with over 90% of the time spent just
 reading the data. 
 
 My guess is that they could be written to do better than that - but it's 
 idiotic of me to say so without understanding the specifics, please 
 forgive me ;-)

Actually, I/O bound is *good*.

Here's the times from the C version, which has been hacked hard in order
to be as fast as I could make it.

     N     total      input    process     output
  1000; 0.004618 = 0.004107 + 0.000438 + 0.000073
  2000; 0.014467 = 0.012722 + 0.001609 + 0.000136
  5000; 0.059810 = 0.051308 + 0.008199 + 0.000303
 10000; 0.204111 = 0.150638 + 0.052800 + 0.000673
 20000; 0.717362 = 0.518343 + 0.197655 + 0.001364
 50000; 3.517340 = 2.628550 + 0.885456 + 0.003331

N here is the number of nodes in the graph;
the number of edges is floor(N**1.5 * 0.75).
Input is the read-word + look up in hash table time.
Process is the compute-the-transitive-closure time.
Output is the time to write the node names in order.
All node names had the form x## with ## being 1..10000.
This is with my own low level code; using scanf(%...s)
pushed the input time up by 40%.

The Mac OS X version of the tsort command took
31.65 CPU seconds for N=10,000, of which
28.74 CPU seconds was 'system'.

Like I said, the languages I used in this test
 ... have I/O libraries with very different
 structures, so what does identical algorithms mean?  If you
 are using dictionaries/hashmaps, and the two languages have
 implementations that compute different hash functions for strings,
 is _that_ using the same implementation? 
 
 Of course, to some degree, user defined hash functions remedy that specific 
 problem.

While creating other, and perhaps worse, ones.

For example, in the Smalltalk code, if you use a Dictionary of Strings,
you're getting Robert Jenkins's hash function in optimised C.  If you
supply your own, you're getting a very probably worse hash function
and it's going to run rather slower.  And above all, the stuff you are
benchmarking is no longer code that people are actually likely to write.

 But we're still going to ask - Will my program be faster if I write it in 
 language X? - and we're 
 still going to wish for a simpler answer than - It depends how you write it!

Here's another little example.  I had a use for the Singular Value Decomposition
in a Java program.  Should I use pure Java or native C?

Pure Java taken straight off the CD-ROM that came with a large
book of numerical algorithms in Java:   T seconds.

After noticing that the code was just warmed over Fortran, and was
varying the leftmost subscript fastest (which is good for Fortran,
bad for most other languages) and swapping all the matrix dimensions: T/2 seconds.

After rewriting in C:  T/4 seconds.

After rewriting the C code to call the appropriate BLAS
and thereby using tuned code for the hardware, T/7 seconds.

Since this was going to take hundreds of seconds per run, the answer was easy.

A simple little thing like using a[i][j] vs a[j][i] made a
factor of 2 difference to the overall speed.

"It depends" is the second best answer we can get.
The best answer is one that says _what_ it depends on.




Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-20 Thread Richard O'Keefe

 How hard is it to port a Haskell program to C?
 If that keeps getting harder (e.g. because of parallelization), then
 it's fair to choose Haskell for performance; but if not, I think
 it's hard to believe that such a high-level language could ever compile
 down to something running faster than its port to C.

There is a logic programming language called Mercury;
it has strict polymorphic types and strict modes and it supports
functional syntax as well as Horn clause syntax.  You could think
of it as 'strict Clean with unification'.

In the early days, they had a list processing benchmark where
the idiomatic Mercury version of the program was faster than
the idiomatic C version of the program, despite the fact that
at the time Mercury was compiling via C.

The answer was that the kind of C code being generated by Mercury
was not the kind of code any sane programmer would ever have written
by hand.  It really does depend on how you write it.

 Will hardware really go for hundreds of cores ?

You can already buy a 700 core machine (I have _no_ idea how many
chips are involved in that) for Java.





Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-17 Thread Richard O'Keefe

On 17/05/2012, at 10:07 PM, Roman Werpachowski wrote:
 No slide deck required.  The task is generating alternating permutations.
 
 Method 1: generate permutations using a backtracking search;
  when a permutation is generated, check if it is alternating.
 
 Method 2: use the same backtracking search, but only allow extensions
  that preserve the property of being an alternating permutation.
 
 Gregg,
 
 what makes Method 2 so much harder than Method 1 to implement in C or C++?


It was me, not Gregg.  There was and is no claim that method 2 is much harder
to implement in C or C++.  In fact both methods *were* implemented easily in C.
The claim was and remains solely that
THE TIME DIFFERENCE BETWEEN *ALGORITHMS*
   can be bigger than
THE TIME DIFFERENCE BETWEEN *LANGUAGES*
   and is in this particular case.

(And that's despite the fact that the C version kept the set of unused
elements as a one-native-word bit mask, while the Prolog version kept it
as a linked list.) 
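For anyone unfamiliar with the trick, the one-native-word bit mask looks like
this (a Haskell sketch of the idea, not the C code under discussion):

```haskell
import Data.Bits (clearBit, popCount, setBit, shiftL, testBit)

-- The set of unused elements packed into one machine word:
-- bit i is set exactly when element i is still unused.
type BitSet = Int

fullSet :: Int -> BitSet               -- the set {0 .. n-1}
fullSet n = (1 `shiftL` n) - 1

remove, insert :: Int -> BitSet -> BitSet
remove i s = clearBit s i
insert i s = setBit s i

member :: Int -> BitSet -> Bool
member i s = testBit s i

main :: IO ()
main = do
  let s  = remove 2 (fullSet 5)        -- {0,1,3,4}
      s' = insert 2 s                  -- back to {0..4}
  print (member 2 s, member 2 s')      -- (False,True)
  print (popCount s)                   -- 4 elements left
```

Membership, insertion, and deletion are all single machine instructions,
which is why the representation suits a tight backtracking loop.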

There is also a second claim, which I thought was uncontentious:
the relative difficulty of tasks varies with language.





Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-16 Thread Richard O'Keefe
In a lecture today I presented (for quite other reasons) a simple combinatorial 
enumeration problem where the difference between two algorithms was far larger 
than any plausible difference between programming languages.  If a programming 
language makes it easier to explore high level alternative algorithms, you may 
very easily end up with faster programs in a slower language.  (Sorry about 
the very long line: broken return key.)



Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-16 Thread Richard O'Keefe
On 17/05/2012, at 2:04 PM, Gregg Lebovitz wrote:

 Richard,
 
 Thank you. This is an example of what I had in mind when I talked about 
 changing the playing field. Do you have a slide deck for this lecture that 
 you would be willing to share with me? I am very interested in learning more.


No slide deck required.  The task is generating alternating permutations.

Method 1: generate permutations using a backtracking search;
  when a permutation is generated, check if it is alternating.

Method 2: use the same backtracking search, but only allow extensions
  that preserve the property of being an alternating permutation.

For n=12 the second method is 94 times faster than the first, and the
ratio increases with growing n.  At the time I wrote the program I had not
heard of Bauslaugh and Ruskey's constant-average-time algorithm for generating
alternating permutations.  Experimentally the Method 2 backtracking search
appears to take constant time per solution anyway.

 n (time ms):       En         n!

 1 (    0.0):        1          1  - method 1
 1 (    0.0):        1             - method 2
 2 (    0.0):        1          2
 2 (    0.0):        1
 3 (    0.0):        2          6
 3 (    0.0):        2
 4 (    0.0):        5         24
 4 (    0.0):        5
 5 (    0.0):       16        120
 5 (    0.0):       16
 6 (    0.1):       61        720
 6 (    0.0):       61
 7 (    0.6):      272       5040
 7 (    0.1):      272
 8 (    4.4):     1385      40320
 8 (    0.3):     1385
 9 (   35.2):     7936     362880
 9 (    1.4):     7936
10 (  359.7):    50521    3628800
10 (    9.3):    50521
11 ( 4035.7):   353792   39916800
11 (   67.0):   353792
12 (49584.6):  2702765  479001600  - method 1
12 (  528.1):  2702765             - method 2

Those times are in C; SWI Prolog (which compiles to an abstract instruction
set that is then interpreted by portable C) was about 24 times slower.
A factor of 94 comfortably exceeds a factor of 24...
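Method 2 is just as easy to write down in Haskell; the sketch below is mine,
not the C or Prolog code timed above, but it reproduces the method-2 counts
in the table:

```haskell
import Data.List (delete)

-- Extend a partial permutation only when the prefix remains
-- alternating (p1 < p2 > p3 < p4 ...), pruning the search early.
alternating :: Int -> [[Int]]
alternating n = go [1 .. n] []
  where
    go []   acc = [reverse acc]
    go rest acc =
      concat [ go (delete x rest) (x : acc)
             | x <- rest, ok (x : acc) ]
    -- acc holds the prefix in reverse; the newest element is first.
    ok (x : y : z : _) = (y < x) /= (z < y)  -- the direction must flip
    ok [x, y]          = y < x               -- the first step goes up
    ok _               = True

main :: IO ()
main = print (map (length . alternating) [1 .. 8])
```

Printing the counts for n = 1..8 gives 1, 1, 2, 5, 16, 61, 272, 1385, the
Euler numbers, as in the table.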





Re: [Haskell-cafe] MonadError vs Control.Exception

2012-05-06 Thread Richard O'Keefe

On 6/05/2012, at 8:09 AM, Albert Y. C. Lai wrote:
 Exceptions appeared as early as in ML and Ada in the 1980s,

Oh they go back long before that.  PL/I had them in the 1960s
(manual 1965, compiler 1966) and Burroughs Extended Algol for
the B6700 had them in the 1970s (maybe already in 1969 when
the B6500 was released).  CLU had them in the late 70s.
The exception handling in CLU paper is 1979 and says
"In referring to the condition as exceptions rather than errors
 we are following Goodenough.  The term exception is chosen
 because, unlike the term error, it does not imply that
 anything is wrong."

The 'jumpout' function in Pop2 was basically an early adoption
of the idea of continuations via Landin's J-functions; it was
used as an exception handling mechanism and that was in the
early 70s.

 Exceptions first appeared under another name in the paper of David Parnas and
 Harald Würges, "Response to undesired events in software systems", in ICSE 1976.

Since several programming languages offered exception handling
of one sort or another well before that, that *cannot* be the
first appearance of exceptions.




Re: [Haskell-cafe] MonadError vs Control.Exception

2012-05-06 Thread Richard O'Keefe

On 7/05/2012, at 10:17 AM, Richard O'Keefe wrote:
 The 'jumpout' function in Pop2 was basically an early adoption
 of the idea of continuations via Landin's J-functions; it was
 used as an exception handling mechanism and that was in the
 early 70s.

Correction: apparently that should read 'in 1968'.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Learn you

2012-05-02 Thread Richard O'Keefe

On 3/05/2012, at 5:18 AM, Brent Yorgey wrote:

 I am curious how the title was translated.  Of course, the English
 title Learn You a Haskell for Great Good uses intentionally
 ungrammatical/unidiomatic English for humorous effect.  Is the
 Japanese title also ungrammatical/unidiomatic Japanese?  Or do
 Japanese speakers not find that humorous?

This native speaker of English doesn't find the effect
of the English title funny, despite finding practically
everything in the world, up to and including a bout of
kidney stones, funny.  Humour styles really don't travel
all that well.  The Little Lisper (and the other books
like The Little Schemer and The Seasoned Schemer) are
presumably meant to be funny, but to me come across as
offensively patronising (you are such a drooling idiot
that if we didn't have this heavy laugh track you'd go
to sleep or something).  Such a pity, because they are
such great books if you can refrain from throwing them
out the window every few minutes.

Now if the Japanese title were *perfect* Japanese,
that *would* be funny, because it would be a case
of a good translation being a bad translation.

I did say that humour doesn't travel well...




Re: [Haskell-cafe] JSON library suggestions?

2012-04-25 Thread Richard O'Keefe

On 25/04/2012, at 9:51 AM, Alvaro Gutierrez wrote:
 For that reason, most standard (fixed size/binary) numeric types like double 
 are a poor choice to contain numeric values specified in JSON; in particular, 
 the mismatch means that conversion can be lossy in both directions.

Note that the conversion *IS* lossy in practice.
If you send a JSON message to a Javascript program,
or a Python program, or a Go program (if I am reading 
src/pkg/encoding/json/decode.go
correctly) what you get will be a 64-bit float.
The Jackson parser for Java uses Double for numbers with a '.' or 'e' by 
default,
although it can be configured to use BigDecimal.

If you want numbers outside the domain of finite 64-bit floats to travel
unscathed through JSON, then you must control not only which languages are
used at each end, but which versions of which libraries and how configured.

I argued the other end of this in the mailing list for another language,
saying that I wanted things that look like integers to be decoded as integers,
and was stepped on hard.  Some people found their programs much simpler if
they always got the same kind of Number whatever the input looked like (in
Jackson, a Number might be returned as an instance of any of five classes).





Re: [Haskell-cafe] ANN: signed-multiset-0.1

2012-04-19 Thread Richard O'Keefe
Signed multisets are unfamiliar to most of us, and I for one
found the paper a little fast-paced.  Can you put a bit more
into the documentation?  Just for starters, I found it
confusing when the documentation talks about an element with
multiplicity zero, because in the _paper_ a value that has
multiplicity zero in an mzset is NOT an element of it.

For a specific example, I haven't the faintest intuition about
what 'map' should do.  Suppose we have
{(k1)x1, (k2)x2}
and f x1 == f x2 = y.  Should the value of map f {...} be
{(k1+k2)y} or {(k1`max`k2)y} or what?


I think anyone is going to be confused that a non-empty
mzset can have 'cardinality' 0, so a little more reassurance
that cardinality/=size is intentional might not go amiss.
Perhaps an additional name such as totalMultiplicity could
make things a little easier for writers and readers, even
though cardinality is correct.

Are you _sure_ that the definitions of isSubmultisetOf and
isProperSubmultisetOf are correct?  Proper subthing is
normally taken as requiring that only *one* member of the
larger collection occur strictly fewer times in the smaller;
here you require that for *all*.

At least once, multiplicities is spelled multiplicites.





Re: [Haskell-cafe] Reconstructing a tree from a list of its paths (to leaves)

2012-04-10 Thread Richard O'Keefe

On 10/04/2012, at 7:55 PM, Arnaud Bailly wrote:
 I am manipulating labeled multiway trees, some kind of lightweight
 XML notation. One thing I would like to be able to do is manipulating
 a tree as a list of (Path, Value). Generating such a list is easy but
 I am a little bit surprised to find it harder to reconstruct a tree,
 given such a list assuming some sensible properties (list is ordered,
 for example).


Given a tree, there is a unique set of (Path, Value) pairs.
Given a set of (Path, Value) pairs, there might be no trees,
one, or infinitely many.

For example, suppose we have

data XM = XE String [XM] | XL String

as a simplification of XML and

paths :: XM -> [([Int],String)]

paths (XL s)    = [([], s)]
paths (XE _ ks) = [(i:p,v) | (i,k) <- zip [1..] ks, (p,v) <- paths k]

as the function to reduce a tree to a list of (path,value) pairs.


paths (XE "foo" [XE "bar" [XL "zabbo"], XE "ugh" [XL "troppo"]])
==
[([1,1],"zabbo"),([2,1],"troppo")]

in which "foo", "bar", and "ugh" have been irretrievably lost.

So you need to be rather more explicit about your sensible properties.
(The list being ordered is not one of them; sorting is a solved problem.)






Re: [Haskell-cafe] Reconstructing a tree from a list of its paths (to leaves)

2012-04-10 Thread Richard O'Keefe

On 11/04/2012, at 4:23 PM, Arnaud Bailly wrote:

 You are right, of course. By sensible properties I simply meant the
 list of (Path, Value) is assumed to represent a tree (eg. it has been
 generated by a traversal of some original tree). By ordered I meant
 Path(s) segments are lexicographically ordered and (Path, Value) are
 enumerated from a tree using depth-first traversal.

My main point was that there is a sensible property that you did not
mention, and you still have not mentioned it, namely that
*ALL* non-structural information about a node must be represented in
the Value that corresponds to it (and of course, that all the nodes
are actually listed somewhere).

Given your constraints, the sequence of (Path,Value) pairs must begin
with a ([],Value) representing the non-structural information about the
root, then a sequence of (1:x11,v11) ... (1:x1k,v1k) ... (n:xn1,vn1)
... (n:xnm,vnm) pairs which you partition as
(x11,v11) ... (x1k,v1k) ... (xn1,vn1) ... (xnm,vnm)
and then process recursively to make the children.

The initial lexicographic ordering is _not_ an important property because
you can ensure that by sorting.  As long as the paths are numbered correctly,
the kind of traversal that was used to generate the initial list is not
important either.  But the no-missing-nodes and
no-missing-non-structural-information properties are crucial.
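The recursive partitioning described above can be sketched as follows.  This is
my own illustrative code, not from the thread: it uses a plain rose tree in
which every node carries a Value (so the no-missing-non-structural-information
property holds by construction), and the names `paths` and `rebuild` are made up.

```haskell
import Data.Function (on)
import Data.List (groupBy, sortBy)
import Data.Ord (comparing)

-- A rose tree in which *every* node carries a value, so no
-- information lives outside the (Path, Value) pairs.
data Tree v = Node v [Tree v] deriving (Eq, Show)

-- Every node, not just the leaves, emits its ([Int], v) pair.
paths :: Tree v -> [([Int], v)]
paths (Node v ks) =
  ([], v) : [ (i : p, w) | (i, k) <- zip [1 ..] ks, (p, w) <- paths k ]

-- Rebuild, assuming every node is listed exactly once.
rebuild :: [([Int], v)] -> Tree v
rebuild ps = Node root (map rebuild kids)
  where
    root = head [ v | ([], v) <- ps ]          -- the ([],v) pair is the root
    kids = map (map snd)                       -- partition by first index...
         . groupBy ((==) `on` fst)
         . sortBy (comparing fst)
         $ [ (i, (p, v)) | (i : p, v) <- ps ]  -- ...after stripping it
```

Under those assumptions the round trip `rebuild (paths t) == t` holds for any
finite tree; if the root's pair were missing, `head` would fail, which is
exactly the no-missing-nodes property at work.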





Re: [Haskell-cafe] adding the elements of two lists

2012-03-28 Thread Richard O'Keefe

On 29/03/2012, at 3:08 PM, Doug McIlroy wrote:
 - without newtype
 
 toSeries f = f : repeat 0   -- coerce scalar to series
 
 instance Num a => Num [a] where
   (f:fs) + (g:gs) = f+g : fs+gs
   (f:fs') * gs@(g:gs') = f*g : fs'*gs + (toSeries f)*gs'
 
 - with newtype
 
 newtype Num a => PS a = PS [a] deriving (Show, Eq)
 
 fromPS (PS fs) = fs -- extract list
 toPS f = PS (f : repeat 0)  -- coerce scalar to series
 
 instance Num a => Num (PS a) where
   (PS (f:fs)) + (PS (g:gs)) = PS (f+g : fs+gs)
   (PS (f:fs)) * gPS@(PS (g:gs)) =
  PS $ f*g : fromPS ((PS fs)*gPS + (toPS f)*(PS gs))

Try it again.

newtype PS a = PS [a] deriving (Eq, Show)

u f (PS x)        = PS $ map f x
b f (PS x) (PS y) = PS $ zipWith f x y
to_ps x           = PS (x : repeat 0)

ps_product (f:fs) (g:gs) = whatever

instance Num a => Num (PS a)
  where
(+) = b (+)
(-) = b (-)
(*) = b ps_product
negate  = u negate
abs = u abs
signum  = u signum
fromInteger = to_ps . fromInteger

I've avoided defining ps_product because I'm not sure what
it is supposed to do: the definition doesn't look commutative.

 The code suffers a pox of PS.

But it doesn't *need* to.
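For reference, the classic power-series product (the same recurrence as in the
quoted code, restated for the bare-list representation) could fill that
`whatever` slot.  A sketch, assuming infinite coefficient streams such as those
produced by `to_ps`, and not a claim about what the poster intended:

```haskell
-- (f + x*F)(g + x*G) = f*g + x*(F*(g + x*G) + f*G)
ps_product :: Num a => [a] -> [a] -> [a]
ps_product (f : fs) ggs@(g : gs) =
  f * g : zipWith (+) (ps_product fs ggs) (map (f *) gs)
ps_product _ _ = []   -- only reached for finite lists
```

For example, `take 4 (ps_product (1 : 1 : repeat 0) (1 : 1 : repeat 0))` gives
`[1,2,1,0]`, the coefficients of (1+x)^2.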





Re: [Haskell-cafe] Mathematics and Statistics libraries

2012-03-26 Thread Richard O'Keefe

On 26/03/2012, at 8:35 PM, Ketil Malde wrote:
 Just to clarify (since I think the original suggestion was mine), I
 don't want to copy R's data frame (which I never quite understood,
 anyway)

A data.frame is
 - a record of vectors all the same length
 - which can be sliced and diced like a 2d matrix

It's not unlike an SQL table (think of a column-oriented data base
so a table is really a collection of named columns, but it _looks_
like a collection of rows).
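As a toy illustration of that "record of equal-length columns that looks like
rows" idea (the type and function names here are invented for the sketch, not
an existing Haskell data-frame library):

```haskell
import qualified Data.Map as M

-- A frame is a collection of named, equal-length columns.
data Frame = Frame
  { frameRows :: Int
  , frameCols :: M.Map String [Double]  -- real data.frames mix column types
  } deriving Show

-- Like df[rows, ] in R: slice the same row positions out of every column.
rowSlice :: [Int] -> Frame -> Frame
rowSlice is (Frame _ cs) =
  Frame (length is) (fmap (\col -> [ col !! i | i <- is ]) cs)

-- Like df[, names] in R: keep only the named columns.
colSelect :: [String] -> Frame -> Frame
colSelect names (Frame n cs) =
  Frame n (M.filterWithKey (\k _ -> k `elem` names) cs)
```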




Re: [Haskell-cafe] adding the elements of two lists

2012-03-26 Thread Richard O'Keefe

On 27/03/2012, at 5:18 AM, Jerzy Karczmarczuk wrote:

 But I don't care about using (+) = zipWith (+) anywhere, outside of a 
 programming model / framework, where you keep the sanity of your data. In my 
 programs I KNEW that the length of the list is either fixed, or of some 
 minimal size (or infinite). Your [4,5] simply does not belong to MY rings, if 
 I decided to keep the other one.

And *that* is why I stopped trying to define instance Num t => Num [t].
If I KNEW that the length of the lists is ... fixed ... then the type
wasn't *really* [t], but some abstract type that happened to be implemented
as [t], and that abstract type deserved a newtype name of its own.

Naming the type
 - makes the author's intent clearer to readers
 - lets the compiler check it's used consistently
 - lets you have instances that don't match instances for
   other abstract types that happen to have the same implementation
 - provides a natural place to document the purpose of the type
 - gives you a way to enforce the intended restrictions
all for zero run-time overhead.





Re: [Haskell-cafe] adding the elements of two lists

2012-03-25 Thread Richard O'Keefe

On 26/03/2012, at 1:01 AM, TP wrote:

 Hello,
 
 My primary problem may be reduced to adding elements of two lists:
 [1,2,3] + [4,5,6] = [5,7,9]

zipWith (+) [1,2,3] [4,5,6]
gets the job done.
 
 However, it seems it is not possible to do that:
 
 ---
 instance Num [Int] where
   l1 + l2 = 
 ---
 
 Why?

Because the 'instance' machinery is keyed off the *outermost* type
constructor (here []) not the *whole* type (here [Int]) and the
reason for that is polymorphism; we want to be able to work with
[t] where t is not specially constrained.

You *can* do
instance (Num t) => Num [t] where ...

 It seems it is necessary to do:
 
 --
 newtype ListOfInt = ListOfInt { getList :: [Int] }

That's *still* a good idea because there are lots of different
things that arithmetic on lists might mean.  For example,
is [1,2] + [3,4,5] an error or not?  If it is not an error,
what actually happens?
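A sketch of that newtype route, picking one concrete answer to the
[1,2] + [3,4,5] question (here a length mismatch is an error; silently
truncating or zero-padding are the other common choices).  The helper names
are invented for the example:

```haskell
newtype ListOfInt = ListOfInt { getList :: [Int] }
  deriving (Eq, Show)

-- zipWith, but refusing ragged inputs instead of silently truncating.
zipSame :: (Int -> Int -> Int) -> ListOfInt -> ListOfInt -> ListOfInt
zipSame f (ListOfInt xs) (ListOfInt ys)
  | length xs == length ys = ListOfInt (zipWith f xs ys)
  | otherwise              = error "ListOfInt: length mismatch"

instance Num ListOfInt where
  (+) = zipSame (+)
  (-) = zipSame (-)
  (*) = zipSame (*)
  negate (ListOfInt xs) = ListOfInt (map negate xs)
  abs    (ListOfInt xs) = ListOfInt (map abs xs)
  signum (ListOfInt xs) = ListOfInt (map signum xs)
  fromInteger n = ListOfInt [fromInteger n]  -- debatable, as the thread notes
```

With this, `getList (ListOfInt [1,2,3] + ListOfInt [4,5,6])` is `[5,7,9]`,
and adding lists of different lengths fails loudly rather than quietly.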




Re: [Haskell-cafe] adding the elements of two lists

2012-03-25 Thread Richard O'Keefe

On 26/03/2012, at 12:51 PM, Chris Smith wrote:
 More concretely, it's not hard to see that the additive identity is 
 [0,0,0...], the infinite list of zeros.  But if you have a finite list x, 
 then x - x is NOT equal to that additive identity!

Said another way:  if you do want [num] to support + and -, then you DON'T
want the definitions of + and - to be unthinking applications of zipWith.

The approach I took the time I did this (before I learned better) was this:

smart_cons :: Num t => t -> [t] -> [t]
smart_cons 0 [] = []
smart_cons x xs = x : xs

instance Num t => Num [t]
  where  (x:xs) + (y:ys) = smart_cons (x+y) (xs + ys)
 xs + [] = xs
 [] + ys = ys
 ...
fromInteger 0 = []
fromInteger n = [n]
...

so that a finite list acted _as if_ it was followed by infinitely many zeros.
Of course this wasn't right either: if the inputs don't have trailing zeroes,
neither do the outputs, but if they _do_ have trailing zeros, [0]+[] = [0]
when it should = [].  That was about the time I realised this was a bad idea.





Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-20 Thread Richard O'Keefe

On 21/03/2012, at 2:14 AM, Jerzy Karczmarczuk wrote:
 
 The existence of standards is not an answer concerning their goodness.

Whoever said it was?  Not me!
But the existence of implementations that conform to standards
*IS* an answer concerning 'will this WORK?'

I do appreciate that the latest and greatest version of 'base'
can do all sorts of things.  However, I _don't_ know how to
install that so that UHC and GHC will both use it.

I'm no different from all the other Haskellers who've been
frustrated by Num wanting Eq and Show, which I would have thought
was an obviously bad idea from the beginning.  I have my own
potential uses for Eq-less Nums.  But I want it to *work* and to
work in more than one Haskell compiler.





Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-20 Thread Richard O'Keefe

On 21/03/2012, at 9:06 AM, Albert Y. C. Lai wrote:

 On 12-03-19 10:05 PM, Richard O'Keefe wrote:
 http://www.haskell.org/onlinereport/haskell2010/haskellch9.html#x16-1710009
 
 Haskell 2010 is already beginning to be out of date.

Was there any point in me pointing out that the latest release of a
well known Haskell compiler downloaded and installed YESTERDAY still
conformed to Haskell 2010, not this change?

Was there any point in me pointing out that the latest release of the
Haskell Platform downloaded YESTERDAY still conformed to Haskell 2010,
not this change?

Or was I shouting into the wind?

The change is a Good Thing.  No disagreement there.
The latest GHC supports it, and that's a Good Thing.
No disagreement there.
Some time there will be a new version of the Haskell Platform
incorporating new versions of the library and compiler,
and that will be a Good Thing too.

The point I was making remains valid:  right NOW, using current releases
of things other than GHC, the odds are that you will have to manually
upgrade your system, and
(a) you might not know how to do that (as I don't know how to upgrade
UHC), and
(b) until the change is more widely adopted, your shiny new code will
work for some people but not others.





Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Richard O'Keefe
One problem with hooking functions into the Haskell numeric
classes is right at the beginning:

class (Eq a, Show a) => Num a
  where (+) (-) (*) negate abs signum fromInteger

where functions are for good reason not members of Eq or Show.
Look at

http://www.haskell.org/haskellwiki/Numeric_Prelude

for a different set of numeric classes that should suit you
better.




Re: [Haskell-cafe] Google Summer of Code idea of project application

2012-03-19 Thread Richard O'Keefe
On scoping the project: be clear about the actual goal.
If you want to take existing Haskell libraries and use them
in OCaml, then you pretty much have to deal with the full
language.  You should start by using as much as you can of
an existing compiler, or by getting an unmodified compiler
to convert source code to Core syntax.

As just one example, a recent thread concerned implementing
lock-free containers.  I don't expect converting one of those
to OCaml to be easy...

If, however, you want to make it possible for someone to
write code in a sublanguage of Haskell that is acceptable
to a Haskell compiler and convert just *that* to OCaml, you
might be able to produce something useful much quicker.

It has long been my opinion that the TV series Butt-Ugly
Martians was inspired by OCaml syntax.  I keep being attracted
by other features of OCaml, but unable to bring myself to use
its syntax.  (I have much the same problem with F#, and if
SML.NET -- www.cl.cam.ac.uk/research/tsg/SMLNET/ -- had been
updated in the last nearly 6 years I would not be looking at
F# at all.)  Never mind popularising Haskell to OCaml users;
this could be a way of popularising OCaml to Haskell users.

One of the reasons Harrop gives for preferring F# to OCaml is
that F# supports true multicore concurrency, while OCaml
apparently still doesn't.





Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Richard O'Keefe

On 20/03/2012, at 2:21 PM, Chris Smith wrote:

 On Mon, Mar 19, 2012 at 7:16 PM, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 One problem with hooking functions into the Haskell numeric
 classes is right at the beginning:
 
 class (Eq a, Show a) => Num a
 
 This is true in base 4.4, but is no longer true in base 4.5.

I didn't say GHC, I said Haskell.

class (Eq a, Show a) => Num a where
    (+), (-), (*)    :: a -> a -> a
    negate           :: a -> a
    abs, signum      :: a -> a
    fromInteger      :: Integer -> a

    -- Minimal complete definition:
    --      All, except negate or (-)
    x - y    = x + negate y
    negate x = 0 - x

comes straight from the Haskell 2010 report:

http://www.haskell.org/onlinereport/haskell2010/haskellch9.html#x16-1710009

There are other Haskell compilers than GHC.




Re: [Haskell-cafe] Are there arithmetic composition of functions?

2012-03-19 Thread Richard O'Keefe

On 20/03/2012, at 2:27 PM, Jerzy Karczmarczuk wrote:

 Richard O'Keefe:
 class (Eq a, Show a) => Num a
   where (+) (-) (*) negate abs signum fromInteger
 
 where functions are for good reason not members of Eq or Show.
 
 This is an old song, changed several times. I have no intention to discuss, 
 but please, Richard O'Keefe:
 WHICH GOOD REASONS??

It is still there in the Haskell 2010 report.

The UHC user manual at
http://www.cs.uu.nl/groups/ST/Projects/ehc/ehc-user-doc.pdf
lists differences between UHC and both Haskell 98 and
Haskell 2010, but is completely silent about any change to
the interface of class Num, and in fact compiling a test
program that does 'instance Num Foo' where Foo is *not* an
instance of Eq or Show gives me this response:

[1/1] Compiling Haskell  mynum  (mynum.hs)
EH analyses: Type checking
mynum.hs:3-11:
  Predicates remain unproven:
preds: UHC.Base.Eq mynum.Foo: 


This is with ehc-1.1.3, Revision 2422:2426M,
the latest binary release, downloaded and installed today.
The release date was the 31st of January this year.

GHC 7.0.3 doesn't like it either.  I know ghc 7.4.1 is
out, but I use the Haskell Platform, and the currently
shipping version says plainly at
http://hackage.haskell.org/platform/contents.html
that it provides GHC 7.0.4.

You may have no intention of discussing the issue,
but it seems to *me* that this will not work in 2012
Haskell compiler mostly conforming to Haskell 2010
because Haskell 2010 says it shouldn't work is a pretty
sound position to take.





Re: [Haskell-cafe] Google Summer of Code idea of project application

2012-03-18 Thread Richard O'Keefe

On 19/03/2012, at 8:01 AM, Damien Desfontaines wrote:
 The project I suggest is mainly inspired by Ticket #1555 [1] : I think that
 would be a great idea to make it possible to call some Haskell code into 
 OCamL.
 In particular, this would contribute to the spreading of Haskell in countries
 where OCamL is prominent, mainly France and Italy. The idea would be the
 following : building a translator which would turn Haskell code into (purely
 functional) OCamL code, in order to enable the use of Haskell functions and
 libraries within OCamL programs, in a human-readable way (the OCamL source
 code generated would ideally be understandable enough to be manually 
 modified).

You might want to consider targeting F# as well as (or instead of) OCaml.
I've had nothing but trouble with GODI, to the point where I gave up on
OCaml entirely.  On the other hand, F# came with Mono...

F# has built-in support for lazy evaluation (although it is not the default),
so this might simplify your task.  Indeed, F# has comprehensions too, so the
main impedance mismatch would be typeclasses.  This would make an F# target
a sensible half-way point for an OCaml target.




[Haskell-cafe] Installing gloss

2012-02-28 Thread Richard O'Keefe
Gloss having been mentioned, I thought I'd look into it.

m% cabal install gloss
Resolving dependencies...
cabal: cannot configure gloss-1.6.1.1. It requires base ==4.5.*
For the dependency on base ==4.5.* there are these packages: base-4.5.0.0.
However none of them are available.
base-4.5.0.0 was excluded because of the top level dependency base -any

What did I do wrong?



Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-20 Thread Richard O'Keefe

On 20/02/2012, at 5:53 PM, Brandon Allbery wrote:

 On Sun, Feb 19, 2012 at 23:27, Richard O'Keefe o...@cs.otago.ac.nz wrote:
 Now *that's* annoying.  It turns out that the xattr command is *there*,
 but 'man xattr' is completely silent.  There is nothing for it in
 /usr/share/man/man1 .  I had been using my own command to do the
 equivalent of xattr -d.
 
 Bzuh?
 
 haral:23276 Z$ pkgutil --file-info /usr/share/man/man1/xattr.1
 volume: /
 path: /usr/share/man/man1/xattr.1
 
 pkgid: com.apple.pkg.BSD
 pkg-version: 10.7.0.1.1.1293150744
 install-time: 1310114676
 uid: 0
 gid: 0
 mode: 644

m% uname -a
Darwin deleted 10.8.0 Darwin Kernel Version 10.8.0: Tue Jun  7 16:33:36 PDT 
2011; root:xnu-1504.15.3~1/RELEASE_I386 i386
m% ls -l /usr/share/man/man1/xa*
-rw-r--r--  1 root  wheel  5427 14 Jul  2009 /usr/share/man/man1/xar.1
-r--r--r--  1 root  wheel  3759 19 May  2009 /usr/share/man/man1/xargs.1.gz
m% pkgutil --file-info /usr/share/man/man1/xattr.1
2012-02-21 10:25:36.201 pkgutil[7023:903] PackageKit: *** Missing bundle 
identifier: /Library/Receipts/NeoOffice-2.2.2-Intel.pkg
2012-02-21 10:25:36.222 pkgutil[7023:903] PackageKit: *** Missing bundle 
identifier: /Library/Receipts/NeoOffice-3.0.1-Intel.pkg
volume: /
path: /usr/share/man/man1/xattr.1
m%

Since you are running Lion and I am not, it isn't _that_ surprising that we
see different things.  It remains surprising that in 10.6.8 the xattr
command is there but its manual page is not.

I've also checked a laptop running the same release of Mac OS X.





Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-19 Thread Richard O'Keefe

On 20/02/2012, at 1:01 PM, Tom Murphy wrote:

 Does anyone know what this will mean for the future of Haskell
 development in OS X?:
 
 http://www.apple.com/macosx/mountain-lion/security.html

Quoting that document:
Or you can install all apps from anywhere,
just as you can today.  You can even temporarily
override your setting by Control-clicking, and
install any app at any time. Gatekeeper leaves it all up to you.

So in the *short* term, it makes little difference.

 1) Writing software for widespread use (a security setting is to only
 run software from the App Store, and I'd like to have my software
 function on users' computers.)

*Short* term, the most that will happen is that people will have to
say "yeah yeah I know just let me have it OK?".

Already in 10.6 there was this nagging feature where you click on a
downloaded document and it says "this was downloaded, do you really
want to open it" and it takes a painfully long time bouncing in the
dock before that dialogue box comes up.

Heck, I have to provide an administrator account  password when I
want to run GDB in my own directory on my own program.  (And you
have to love the way they removed the MacOS equivalent of truss
because it was superseded by DTrace, but you have to be superuser
to use DTrace...)

*Short* term, it's just more continuing low-level harassment by
Apple (even if perhaps with good intentions).  Long term, well,
there's a reason why I keep Solaris, Linux, and OpenBSD around...





Re: [Haskell-cafe] Haskell development in Mac OS X after Gatekeeper

2012-02-19 Thread Richard O'Keefe

On 20/02/2012, at 3:04 PM, Jack Henahan wrote:
 
 What's your setup like that you can't even use gdb in your own directory? 
 That sounds unusual. And you can turn off the warning, either globally or 
 selectively.[3][4]

My setup is Mac OS X 10.6.8, pretty much out of the box, plus a bunch of 
additional
stuff, but nothing removed.  The invariable University policy is that *nobody* 
except
a few designated system administrators is allowed to have root access on any 
machine
connected to the University's ethernets.  (Apparently nobody has told them about
VirtualBox yet, so I can happily root access Solaris, Linux, and OpenBSD on my 
Macs.)
So
 - there's the root account,
 - there's an admin account just for me, which lets me turn the printer on
   and install software, but not run DTrace, and
 - there's my ordinary user account.

I can run gdb just fine, it's setting a breakpoint and then trying to run the
program that it doesn't like.  I have to re-authenticate for this every time I
log in.
 
 [3]: 
 http://osxdaily.com/2010/03/29/disable-the-are-you-sure-you-want-to-open-this-file-warning-dialogue-in-mac-os-x/

Thank you.  I did not know about that.  I have been working around it by
deleting the com.apple.quarantine xattr.

 [4]: 
 http://osxdaily.com/2010/09/12/disable-application-downloaded-from-the-internet-message-in-mac-os-x/

Now *that's* annoying.  It turns out that the xattr command is *there*,
but 'man xattr' is completely silent.  There is nothing for it in
/usr/share/man/man1 .  I had been using my own command to do the
equivalent of xattr -d.





Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-08 Thread Richard O'Keefe

On 9/02/2012, at 3:16 AM, Steve Horne wrote:

 On 07/02/2012 22:56, Richard O'Keefe wrote:
 On 8/02/2012, at 2:11 AM, Steve Horne wrote:
 
 
 To be fair, field OF record isn't bad in that sense. However, it would 
 defeat the purpose of TDNR - the record isn't first, and therefore cannot 
 be used (given a left-to-right typing direction) as a context to offer 
 member name suggestions.
 
 Yes, but why SHOULD there be a specific typing direction?
 ML manages perfectly fine without it.
 
 For the only reason that any language feature should exist - because it is 
 useful. In any language with a rich library, it is useful to get hints as to 
 which names are available in a particular context. It saves on the need to 
 memorize thousands - sometimes tens or even hundreds of thousands - of 
 context-sensitive names and their spellings, and saves on getting distracted 
 needing to hunt through manuals.

You have totally confused me.  All of those are good things.
NONE of them depends on whether it is field¶record (read
"field OF record") or record.field (read "record, oops, I
only want part of it").

I think people are losing sight of the fact that code gets
read more often than it gets written (at least, if it is code
that is _worth_ writing).

If the complaint is that certain IDEs designed originally for
Java find it easier to give you a hint after record., then
I would point out that

 - there is no reason IDEs cannot be redesigned.
   Type an expression, then select it if it's complex
   or don't bother if it's just an identifier, literal,
   or bracketed, then
   hit your choice of key (maybe Option-r, ® Reminds me of
   Record), pick your field from a menu, and the IDE
   drops field¶ in front of the selected expression and
   extends the selection to incorporate the field.
   There is no law of God, Nature, or Man that says the
   order in which you press the keys has to correspond
   to the order in which you read things.

 - languages like C++ and Ada and Java already have the
   problem that you can write f (x) where the sensible
   candidates for f depend on what x is.  That is, we
   ALREADY have a need for right context to resolve a
   left side identifier.  Hmm; I was thinking of overloading,
   but actually, Haskell and C have this problem too.
   For int x I want close(x) but for FILE* x I want fclose(x).
   You could write in a C IDE (x, y, z)magic key (hey, it
   could be © for Call) and have a menu of visible functions
   with that parameter profile pop up.

 - if you have thousands of context-sensitive identifiers
   visible in one module, you *desperately* need a better
   naming convention and shorter import lists.

 - I have Pharo open on my screen.  There are some 3077
   classes in it.  It insists on popping up these so-called
   helpful menus of names that match what I've typed so far.
   I find them distracting, and they tend to obscure what I
   am doing.  I *wish* they didn't do that.  But I have to
   admit that I've never actually seen a long list.  There
   are 30,674 'function names' around (both of the numbers
   are before any of my code is loaded).  Again, I start
   typing something that could be a function name, and up
   pops a list of candidates.  FEH!  Despite Smalltalk's lack
   of any kind of compile-time type checking (this is Pharo,
   not Strongtalk), again, I've never seen a long list.

So I don't see any reason to warp what people *read* away
from readability (function before argument) in order to pander
to the imagined limitations of writing tools.

 The point here is for intellisense-like features to work effectively in text 
 editors. The context must come to the left for that to work because...

And that is the claim I just demolished.  The order in which things
are entered and the order in which they are displayed do not have to
be the same.  That is, after all, one thing that wizards do for you.

 That said, I did take a look in an old COBOL book. I didn't find either the 
 dot or the OF.

That is extremely odd, because while COBOL accepts both "field OF record"
and "field IN record", people mostly use OF.  That must have been the
world's worst COBOL book.
(Not unlike the
Prolog textbook I met in a university book shop back when Prolog was new: every 
single example
was syntactically illegal.)
 
 Haskell already has a . for selecting a name through a context - we call that 
 context a module. According to Bertrand Meyer of Eiffel fame, a class is both 
 a module and a type.

The Haskell, Ada, Lisp, and CAML designers disagree.

 
 It would be nice to have some lexical disambiguation in this case - I might 
 prefer some other spelling, so long as the context is on the left and the 
 name is on the right. I was going to propose ?, but that's taken already 
 for implicit parameters - which I don't know the first thing about so I can't 
 guess possible conflicts.

It is by now difficult to find an operating system

Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-08 Thread Richard O'Keefe

On 9/02/2012, at 1:26 PM, Evan Laforge wrote:

 How about § then?  Surely at this late date we can allow ourselves *one* 
 non-ASCII character?
 The very name of it (*section* sign) suggests taking a part; and if you are 
 totally in love
 with dot, think of it as a dot with ponytails.
 
 I suggest record的field, or record之field for the more classically
 minded.  And why not some synonyms like recordのfield and
 recordकाfield, to be inclusive.

I chose the most available non-ASCII character I could find.
Set the criterion to be present in most ISO 8-bit character sets
and there are really only two candidates, section sign and degrees sign.
That hardly opens flood-gates.  It should certainly be limited to
characters that do not occur in a word, ruling out record մաս field.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-07 Thread Richard O'Keefe

On 7/02/2012, at 1:41 PM, AntC wrote:
 Richard, now you're just being playful.

Half fun and full earnest.

I *do* regard 'field OF record' as far more readable, intuitive, &c
than 'record.field'.  With the number of meanings '.' already has in
Haskell, I *do* regard any attempt to overload it for field access
as deeply problematic and likely in practice to push much Haskell
code over the readability event horizon.

Anyone who has had occasion to write Fortran in the last 20+ years
has had to discover just how quickly you can get used to using
'record%field'.  I'm not really a COBOL programmer, but Prolog and
Erlang and Smalltalk taught me well that '.' in a programming language
can perfectly well mean exactly what it means in English: end of
statement.  I just do not buy the idea that the connection between
dot and field access is anything more than a habit of mind engendered
by a few languages or that it should be respected any more than the
habit of using a(i) -- Fortran, Simula 67, Ada, Dijkstra's notation,
PL/I -- or a[i] -- Algol 60, Algol 68, Pascal, C and its horde of
delirious imitators -- for array access.

The idea of using #field for a field access function has of course
an appeal to people familiar with ML or Erlang.  The connection with
ML is very close.  # is already used.  I rather like
field¶ record ([the] field[part] [of] record), with the ¶ Pilcrow
reminding me of Part.  Following ML, we could perfectly well allow
3¶ as well, meaning field 3 of any tuple that _has_ a field 3, the
type to be resolved by context.




Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-07 Thread Richard O'Keefe

On 8/02/2012, at 2:11 AM, Steve Horne wrote:

 On 06/02/2012 23:58, Richard O'Keefe wrote:
 On 4/02/2012, at 12:13 AM, Gábor Lehel wrote:
 All of this said, record.field is still the most readable, intuitive,
 and familiar syntax for selecting a field from a record that I know
 of.
 Having learned COBOL and Algol 68 before Haskell was dreamed of,
 I regard
 
  field OF record
 COBOL in particular isn't a well-known exemplar of readability. It's widely 
 seen as a bad joke. I have used COBOL myself, and largely agree with that, 
 with the proviso that I used COBOL a long time ago and have repressed most of 
 the details.

Like Fortran, COBOL has changed a *lot* since 'a long time ago'.
And if you did want to be fair, I didn't praise any other aspect of COBOL,
only the naturalness and readability of its notation for accessing a field
of a record.

 To be fair, field OF record isn't bad in that sense. However, it would 
 defeat the purpose of TDNR - the record isn't first, and therefore cannot be 
 used (given a left-to-right typing direction) as a context to offer member 
 name suggestions.

Yes, but why SHOULD there be a specific typing direction?
ML manages perfectly fine without it.

- #1;
stdIn:1.1-1.3 Error: unresolved flex record
   (can't tell what fields there are besides #1)
- #1 (true,3);
val it = true : bool
- #1 (42,stuff,false);
val it = 42 : int

If a right-to-left typing direction works well for #field record
in one language with constrained Hindley-Milner types, why would it
not work well for field¶ record in another language with constrained
Hindley-Milner types?
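Richard's rhetorical question — why shouldn't a field name that precedes the record be resolvable under constrained Hindley-Milner typing — can be sketched in today's GHC with type families.  `HasFst` and `fst'` below are illustrative names only, not part of any proposal discussed in the thread:

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A selector written *before* its record, resolved from the
-- argument's type, much like SML's #1.
class HasFst t where
  type Fst t
  fst' :: t -> Fst t

instance HasFst (a, b) where
  type Fst (a, b) = a
  fst' (x, _) = x

instance HasFst (a, b, c) where
  type Fst (a, b, c) = a
  fst' (x, _, _) = x

main :: IO ()
main = do
  print (fst' (True, 3 :: Int))          -- True
  print (fst' (42 :: Int, "s", False))   -- 42
```

As in the SML transcript above, the type of the record argument, not any left-to-right reading order, is what resolves the selector.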

Why sacrifice readability (field name precedes record) for the sake
of, well, for the sake of what exactly escapes me.


 
 Also, even when I used COBOL (late eighties, early nineties) I'm pretty sure 
 it supported record.field.

That certainly wasn't the case up to COBOL-85.  I don't have a copy of
COBOL 2002, so I can't speak for that, but COBOL 74 and COBOL 85 are the
only candidates for those dates, and they definitely did NOT support
record.field.  Since '.' is the statement terminator in COBOL, it's
intrinsically unlikely.
(You did *check* a COBOL syntax summary, easily found on the web, before
posting?  Which?)

 I don't remember using it, but then I don't remember using OF either - a 
 side effect of loading one record at a time into working storage and 
 effectively having a separate variable for each field. Anyway, assuming I'm 
 not suffering from worse-than-usual memory, COBOL accepted this common 
 convention.

Yes, you are suffering from worse-than-usual memory, and it was common
practice in some shops to use the same field name in multiple records,
so that the CORRESPONDING language feature would have some point!
 
 On the more general point of choosing an alternative operator, I agree to a 
 point, but familiarity does count for something. Others will point out that 
 Haskell dares to be different, but it's possible to be too daring and too 
 different. Being different for the sake of being different is for those 
 teenagers who go on about being random and whatever else they go on about 
 these days. The success of languages like Java, C# and C++ is based on 
 familiarity.

Using pointy brackets for generic parameters and :: for name scope were
not familiar when C++ introduced them.  And there was prior art in other
languages for *both* of those.

One common prior practice, relevantly enough, was '.' for name scope.

 I think Haskell should dare to be different when there's a point to that - 
 where necessary based on a principle. We have type classes rather than OOP 
 classes for a principled reason. We have the IO monad rather than effectful 
 functions for a principled reason.

And if C++ can break with prior practice for a practical reason, Haskell
can break with prior practice for the same reason: not breaking existing
code, fitting into the existing language structure as well as practical.
 
 If we don't have traditional field-selection for a principled reason

We don't have it because we don't need it.  And we don't need it because
traditional field selection serves two roles: *selecting* one field and
*updating* one field.  It's a poor way to handle the latter use case,
because one often needs to update more than one field.  It's not _that_
good for the former use case either, if you need to access more than two
fields from the same record.

In another functional language that I use, I've noticed what seems to me
a marked increase in readability by switching _away_ from field selection
to pattern matching.
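That difference can be seen even with a toy record (illustrative only, not an example from the thread):

```haskell
-- A two-field record, purely for illustration.
data Point = Point { px :: Double, py :: Double }

-- Selection style: the record name is repeated for every field access.
norm2Sel :: Point -> Double
norm2Sel p = px p * px p + py p * py p

-- Pattern style: bind both fields once, then use plain names.
norm2Pat :: Point -> Double
norm2Pat (Point x y) = x * x + y * y

main :: IO ()
main = print (norm2Sel (Point 3 4), norm2Pat (Point 3 4))  -- (25.0,25.0)
```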

 I think that principle is a very weak one. If names can be scoped to modules, 
 to case expressions, to let expressions etc, why not to records? Of course 
 there's a difference, but IMO it's not an important one.

Nobody is arguing against names being scoped to records.
The argument is against using dot for it because dot has too many other uses.
We have already seen quite

Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-06 Thread Richard O'Keefe

On 4/02/2012, at 12:13 AM, Gábor Lehel wrote:
 
 All of this said, record.field is still the most readable, intuitive,
 and familiar syntax for selecting a field from a record that I know
 of.

Having learned COBOL and Algol 68 before Haskell was dreamed of,
I regard

field OF record

as the most readable, intuitive, and familiar syntax.  Given our
background in reading natural language text, most of us probably
thought once upon a time that '.' was the most readable, intuitive,
and familiar syntax for terminating a statement, and in COBOL, NDL,
and Smalltalk, it _is_.  There's certainly nothing about a dot
that suggests field selection, *unless* you happen to be familiar
with a programming language that does it that way.  If we are going
to let compatibility with Pascal or C or the like be our guide to
readability and intuition, when are we going to switch from ! and
!! for indexing to _[_]?





Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-02-01 Thread Richard O'Keefe

On 1/02/2012, at 7:10 PM, Anthony Clayden wrote:

 
 On 1/02/2012, at 11:38 AM, AntC wrote:
 As soon as you decide to make 'virtual record selectors'
 just ordinary  functions (so they select but not update)
 , then you can see that field names  are also just
 ordinary functions (for selection purposes). So the
 semantics  for field 'selection' (whether or not you use
 dot notation) is just function  application. So
 Type-Directed Name resolution is just instance resolution.
 So  it all gets much easier.
 
 
 Richard O'Keefe wrote:
 ...  Making f x
 and x.f the same is pretty appealing, but it is imaginable
 that the former might require importing the name of f from
 a module and the latter might not. That is to say, it lets
 f and .f have completely different meanings. Oh the joy! 
 Oh the improved readability!  -- on some other planet,
 maybe.
 
 Hi Richard, I'm not sure I understand what you're saying.

I'm saying that if dot is to do anything at all that it does
not do now, then f x and x.f being identical is sort of OK
(though it does rather clash with f . g), but any differences
between them would be confusing.
 
 This is all so weird I'm inclined to say that one-sided dot
 is probably a syntax error, and reject it.

It wasn't a syntax error, it just wasn't intended to be
Haskell code at all, just an ad hoc English text abbreviation
for f occurring after a dot.

Of course (x.) = \f -> f x
and   (.f) = \x -> f x
are precisely the kind of sections people will expect to be
legitimate once they've seen (x.f)...

Of course, if f x and x.f mean the same thing, we don't need
x.f, do we?
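For what it's worth, later versions of base (after this 2012 thread) added postfix application as an ordinary operator, `(&)` in Data.Function, which puts the record on the left without giving dot any new meaning:

```haskell
import Data.Function ((&))

-- x & f = f x: the "context" sits on the left, the selector on the
-- right, yet both are ordinary functions and no new syntax is needed.
example :: Int
example = (1 :: Int, "a") & fst & (+ 1)

main :: IO ()
main = print example  -- 2
```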




Re: [Haskell-cafe] Some thoughts on Type-Directed Name Resolution

2012-01-31 Thread Richard O'Keefe

On 1/02/2012, at 11:38 AM, AntC wrote:
 As soon as you decide to make 'virtual record selectors' just ordinary 
 functions (so they select but not update), then you can see that field names 
 are also just ordinary functions (for selection purposes). So the semantics 
 for field 'selection' (whether or not you use dot notation) is just function 
 application. So Type-Directed Name resolution is just instance resolution. So 
 it all gets much easier.

I'm reminded of Pop-2, where f(x) and x.f meant exactly the same thing.
Overloading was a (dynamic) property of f, not a property of dot.

Ada had two reasons for adding dot syntax, and much as I admire Ada,
I'm not sure that I agree with either of them.
One was to be more familiar to programmers from other languages, but
since there remain interesting differences between x.f in Ada and x.f
in other languages, it's not clear to me how much of a kindness that
really is.  The other is that x.f means basically what f(x) would have,
*had f(x) been legal*; the aim was to be able to use methods without
having to important everything from a module.

Now that might have some relevance to Haskell.  Making f x and x.f the
same is pretty appealing, but it is imaginable that the former might
require importing the name of f from a module and the latter might not.
That is to say, it lets f and .f have completely different meanings.
Oh the joy!  Oh the improved readability!  -- on some other planet, maybe.





Re: [Haskell-cafe] [Haskell Cafe] strict version of Haskell - does it exist?

2012-01-30 Thread Richard O'Keefe

On 31/01/2012, at 5:47 AM, Doug McIlroy wrote:

 Is there any document describing why there is no ghc --strict flag
 making all code strict by default?
 Wouldn't such a '--strict' flag turn Haskell/GHC into a better C/gcc
 compiler?
 
 I agree that a strict flag would turn Haskell into C--but that's a
 perversion of Haskell.

On the other hand, a designed-to-be-strict language-and-libraries
with close-to-Haskell *syntax* would be nice.  I recently
described F# as combining the beauty of Caml with the functional
purity of C# -- both of course are like the snakes of Ireland.





Re: [Haskell-cafe] The Riddle of the Buddhist Monk

2011-12-20 Thread Richard O'Keefe

On 21/12/2011, at 4:34 AM, Patrick Browne wrote:

 I have simplified the code using constructors and export.
 I can evalute the qualified expressions but I do not get the expected results.
 
 module  MONKONMOVE (module MONKONMOVE)where

When I see MONKONMOVE I think what's a MONKON?

Even the Java designers say if you paste together several words all in
capitals, put underscores between them.  So is it MONKON_MOVE,
MONK_ONMOVE, MONK_ON_MOVE (what does that mean?) or something else?

The classic puzzle has nothing to do with monks, Buddhist or otherwise.
It goes something like this:
One morning you start climbing a mountain at 8am and reach the
 top by 6pm.  You stay there overnight.  Next morning, you start
 down on the same path at 8am and reach the bottom by 6pm.
 Prove that there is some time of day 8am ≤ t ≤ 6pm such that
 you are at the same place at time t on both days.

The solution as given by Lewis Carroll is
Pretend there are two people doing the trip, one up and one
 down, on the same day.  Clearly they must meet.  QED.

So what exactly is the program supposed to do?  The problem lacks
the information from which a specific value of t can be computed;
all that can be determined is that *some* value of t must exist.
However, that proof depends on the *continuity* of the time-place
mappings:
if f, g : [0,1] -> [0,1] are continuous functions
and f(0) = 0, f(1) = 1, g(0) = 1, g(1) = 0
then there exists t in [0,1] such that f(t) = g(t)
and I don't see anything in the code that talks about continuity.
It should be clear that daredevils going up and down the mountain
on sufficiently springy pogo sticks (or electrons jumping in their
insouciant quantum way) need *not* meet.
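The continuity requirement is also what makes any *computation* of a meeting time possible: for continuous paths the sign change of the difference can be bisected.  A sketch, with made-up paths `up` and `down` (illustrative choices, not anything from the original puzzle):

```haskell
-- Positions as functions of time, both continuous on [0,1],
-- with up 0 = 0, up 1 = 1, down 0 = 1, down 1 = 0.
up, down :: Double -> Double
up t   = t * t
down t = 1 - t

-- Bisection on the sign change of (up - down).  This homes in on a
-- meeting time precisely because both paths are continuous; pogo-stick
-- (discontinuous) paths would break the sign-change invariant.
meet :: Int -> Double -> Double -> Double
meet 0 lo hi = (lo + hi) / 2
meet n lo hi
  | up m >= down m = meet (n - 1) lo m
  | otherwise      = meet (n - 1) m hi
  where m = (lo + hi) / 2

main :: IO ()
main = print (meet 60 0 1)  -- close to (sqrt 5 - 1) / 2 ≈ 0.618
```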





Re: [Haskell-cafe] [Alternative] summary of my understanding so far

2011-12-18 Thread Richard O'Keefe

On 17/12/2011, at 3:35 PM, Matthew Farkas-Dyck wrote:

 On 15/12/2011, Gregory Crosswhite gcrosswh...@gmail.com wrote:
 1) Documentation really needs to be improved
 2) some/many cannot be physically separated from Alternative, but there
 *might* be an advantage to creating a subclass for them anyway purely for
 the sake of conveying more information about a type to users
 3) Maybe and [] are sensible instances of Alternative, even if many/some
 often enters an infinite loop.
 4) It is possible to provide special instance of many/some that satisfy the
 equations of many/some, with the slight disadvantage that these solutions
 are no longer the least solutions.
 
 Based on all of this, at this moment in time it seems to me that the most
 sensible way forward is to fix the documentation, tweak the definition of
 Alternative to no longer require the least solutions of the equations, and
 then to adopt the new instances for Maybe and [].
 
 Thoughts?
 
 (1) If we do (4), then the documentation ought to be adequate as-is.

No.  Not by a country mile.
It's better than non-existent.
It's better than misleading.
But it's not even on the same *continent* as adequate.

A lot of Haskell packages have pretty much the same level of documentation.
And I didn't pay one single cent for it, so I can't scream too loudly.
But let's not kid ourselves.




Re: [Haskell-cafe] [Alternative] summary of my understanding so far

2011-12-18 Thread Richard O'Keefe

On 19/12/2011, at 3:44 PM, Gregory Crosswhite wrote:
 So what do you all think about my own suggestion for the documentation?

It is an improvement.

Documentation for a library module needs to start by telling people what
it is for.  For a particular function, someone needs to know very quickly
is this what I am looking for? is this the kind of thing I _should_ have
been looking for?

One important thing about the Monoid instance for Maybe is that

There is more than one way to turn Maybe into a Monoid.
One way treats Maybe a as a truncated [a] and does not
depend on any properties of a, it takes
mappend (Just x) _ = Just x

The other requires a itself to be a Monoid, and lifts
a's operations to Maybe a:
mappend (Just x) (Just y) = Just (mappend x y)
The latter, more interesting, case is the one implemented here.
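In current base the first reading is packaged as the `First` newtype in Data.Monoid, while the instance for Maybe itself gives the second reading.  A small sketch, using today's `<>` (written `mappend` at the time of this thread):

```haskell
import Data.Monoid (First (..))

-- Reading 1: Maybe as a truncated list -- keep the first Just.
firstWins :: Maybe Int
firstWins = getFirst (First (Just 1) <> First (Just 2))  -- Just 1

-- Reading 2 (the instance for Maybe itself): lift the element
-- type's own Monoid into Maybe.
lifted :: Maybe [Int]
lifted = Just [1] <> Just [2]  -- Just [1,2]

main :: IO ()
main = print (firstWins, lifted)
```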

(In the same way, bounded integers like Int can be viewed as Monoids in
at least 4 ways, only two of which are predefined in Data.Monoid.
   mempty = minBound
   mappend = max

   mempty = maxBound
   mappend = min
are the other two.  In fact these apply to anything that is Bounded and Ord.)
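Those two extra readings can be written as newtype wrappers.  `MaxOf` and `MinOf` below are illustrative names (base later gained similar wrappers in Data.Semigroup):

```haskell
newtype MaxOf a = MaxOf a deriving (Eq, Show)
newtype MinOf a = MinOf a deriving (Eq, Show)

instance Ord a => Semigroup (MaxOf a) where
  MaxOf x <> MaxOf y = MaxOf (max x y)

instance (Bounded a, Ord a) => Monoid (MaxOf a) where
  mempty = MaxOf minBound   -- minBound is the identity for max

instance Ord a => Semigroup (MinOf a) where
  MinOf x <> MinOf y = MinOf (min x y)

instance (Bounded a, Ord a) => Monoid (MinOf a) where
  mempty = MinOf maxBound   -- maxBound is the identity for min

main :: IO ()
main = print ( mconcat (map MaxOf [3, 1, 2 :: Int])
             , mconcat (map MinOf [3, 1, 2 :: Int]) )
```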

The point is not that your proposed documentation doesn't say that, but it
doesn't say that the MonadPlus reading is a *LEGITIMATE* way to view Maybe
as a Monoid, which happens not to have been the one chosen; also that this
possibility that the Monoid instance you WANT might not be the one you GET
is to me the first thing you need to understand about it.  Yes, there is a
blanket warning about this, but it specifically mentions Num.  Whenever it
is possible for a reasonable person to want a Monoid instance and get one
that is not the instance s/he wanted, it's worth highlighting in the docs.





Re: [Haskell-cafe] [Alternative] summary of my understanding so far

2011-12-18 Thread Richard O'Keefe

On 19/12/2011, at 5:46 PM, Gregory Crosswhite wrote:
[improved Monoid documentation]

I would go so far as to point out that mappend is a generalisation of
Data.List.sum, Data.List.product, Data.List.and, and Data.List.or,
where the initial value and combining rule are implied by the type.


 
 This additional information unfortunately makes the documentation more 
 verbose,

One man's more verbose is another man's less cryptic.

I really don't like the emphasis on Num, as if it was a bizarre feature of
Num that there's more than one Monoid reading for it.  This is a *common*
property of data types.  For example, Sets can be seen as monoids with
empty and union; and Sets with a universe can also be seen as monoids with
universe and intersection.

The more I think about it, the less idea I have _what_ to expect for _any_
instance of Monoid.




Re: [Haskell-cafe] How to get a file path to the program invoked?

2011-12-15 Thread Richard O'Keefe

On 16/12/2011, at 11:55 AM, Brandon Allbery wrote:
 
 Note that exec -a is a bash-ism and not portable to POSIX shells

Recent versions of ksh also support this, so it's not just bash.
But there are certainly a lot of POSIX shells that don't, including
the version of ksh on my main machine.




Re: [Haskell-cafe] Splitting off many/some from Alternative

2011-12-14 Thread Richard O'Keefe
Suppose you have a typeclass C with operations x y z w
and you decide that there's a real difference, that more
things can support x y than can support z w.

If you then split
C' x y
  C z w

then all existing *uses* of C are happy.
But all the existing *instances* of C have to be split
into an instance for C' and an instance of C.

Alternative is of course the example we're discussing,
but this is a problem that's going to keep on coming up.

Suggestion 1:
If someone writes an instance declaration that says
some type T is an instance of some class C, and there
is a parent class C' for which the compiler can't find
a declaration that T belongs to C', BUT all the operations
needed for membership in C' are actually in the instance
declaration for C,

why can't the compiler automatically construct an instance
declaration that T is an instance of C' and move the
necessary definitions there?

so  instance (context) => C T
      where x = ...
            y = ...
            z = ...
            w = ...
=>  instance (context) => C' T
      where x = ...
            y = ...
    instance (context) => C T
      where z = ...
            w = ...

I dare say there are all sorts of things that could go wrong
with this, but I am too dumb to see what they are.  I hasten
to add that I have no idea whether this could be extended to
multiparameter type classes or not.

Something like this would make the refactoring of C into C'+C
pretty much painless; the compiler could warn about unfactored
instance declarations so you could migrate gradually to the new
structure, but it wouldn't break working code.
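The manual refactoring that both suggestions aim to automate looks like this today — every existing instance of C must be split in two by hand (C, C', and the methods x, y, z, w as in the sketch above; the String bodies are placeholders):

```haskell
-- The split class hierarchy: C' carries x and y, C adds z and w.
class C' a where
  x :: a -> String
  y :: a -> String

class C' a => C a where
  z :: a -> String
  w :: a -> String

-- Each old "instance C T" now needs two declarations:
instance C' Bool where
  x _ = "x"
  y _ = "y"

instance C Bool where
  z _ = "z"
  w _ = "w"

main :: IO ()
main = print (x True, y True, z True, w True)
```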

Suggestion 2:
Make it explicit.  Have
instance (context) => C T
  deriving C' T  -- new feature
  where x =
y =
z =
w =
do the refactoring.  This requires a one-line explicit change
for each existing instance declaration, so there's *some* pain,
but it's *less* pain than a complete manual refactoring, which
you might need to do repeatedly while working out a design.





Re: [Haskell-cafe] Splitting off many/some from Alternative

2011-12-12 Thread Richard O'Keefe
Perhaps the most urgent change would simply be better documentation
for what 'some' and 'many' are all about.  Some examples would be nice.
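For instance, a minimal example with base's ReadP parser, which is an Alternative, so `some` and `many` behave as intended (one-or-more and zero-or-more repetitions):

```haskell
import Control.Applicative (some)
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP (ReadP, readP_to_S, satisfy)

digit :: ReadP Char
digit = satisfy isDigit

-- `some digit` parses one or more digits.
number :: ReadP String
number = some digit

main :: IO ()
main = print (readP_to_S number "42x")  -- includes ("42","x") among the parses
```

For Maybe and [], by contrast, `some`/`many` loop, which is exactly what the documentation fails to warn about.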





Re: [Haskell-cafe] How to get a file path to the program invoked?

2011-12-04 Thread Richard O'Keefe

On 4/12/2011, at 7:32 PM, wren ng thornton wrote:
 Part of the problem is that, as Alexey says, the first element of argv is 
 just whatever is passed to exec, which is not guaranteed to be a complete 
 path, a canonical path, or any other specific thing we'd desire. It's not at 
 all straightforward to determine the actual location of the executable, 
 especially not in a platform-independent manner. argv[0] can't be trusted, 
 scanning through $PATH isn't guaranteed to find it (and even if you find 
 something of the right name, it's not guaranteed to be the correct 
 executable), etc etc.

In particular, with posix_spawnp(), the $PATH that is used to find the
executable and the $PATH in the environment that the executable starts
with can be two different things.




[Haskell-cafe] could not create compact unwind

2011-12-01 Thread Richard O'Keefe
I just did
cabal install cabal-install
on a Mac running Mac OS 10.6.8 and got the eventual response
[44 of 44] Compiling Main             ( Main.hs, dist/build/cabal/cabal-tmp/Main.o )
Linking dist/build/cabal/cabal ...
ld: warning: could not create compact unwind for .LFB3: non-standard register 5 being saved in prolog
Installing executable(s) in /home/cshome/o/ok/.cabal/bin

I also had this problem today:
m% cabal install quickcheck   
Resolving dependencies...
cabal: dependencies conflict: ghc-6.12.3 requires pretty ==1.0.1.2 however
pretty-1.0.1.2 was excluded because ghc-6.12.3 requires pretty ==1.0.1.1

What's the procedure for wiping everything out and starting again?




[Haskell-cafe] Using Data,TypeLevel.Num

2011-11-30 Thread Richard O'Keefe
I'd like to write a small module using Data.TypeLevel.Num.
It has type level constants, variables, addition, and maximum, which is
pretty much all I need.  But I've never used it before, and there's one
thing I want to do that I don't understand how to do.

val :: Nat t => t -> Int
val _ = t as an Int

e.g., if x :: D8 then val x = 8.

The value-level reflection functions in Data.TypeLevel.Num.Ops all seem to be
'undefined'.
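Data.TypeLevel.Num predates GHC's built-in type-level naturals.  For comparison only (not an answer in terms of the library asked about), the `val` function wanted here can be written directly with today's GHC.TypeLits:

```haskell
{-# LANGUAGE DataKinds, ScopedTypeVariables #-}

import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, natVal)

-- Reflect a type-level natural down to a value, as `val` asks for.
val :: forall n. KnownNat n => Proxy n -> Int
val _ = fromIntegral (natVal (Proxy :: Proxy n))

main :: IO ()
main = print (val (Proxy :: Proxy 8))  -- 8
```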





Re: [Haskell-cafe] Haskellers.com polls (specifically, the mascot question)

2011-11-27 Thread Richard O'Keefe

On 25/11/2011, at 11:01 PM, Michael Snoyman wrote:

 Hi all,
 
 I've just added the polls feature to Haskellers.com, and created an
 initial poll about mascots[1]. Let me know if there are any issues.

You must be logged in to submit an answer.

I don't know how many opinions you'll lose from that, but you'll
certainly lose mine.




Re: [Haskell-cafe] A Mascot

2011-11-24 Thread Richard O'Keefe

On 23/11/2011, at 4:40 AM, Karol Samborski wrote:

 And what about a cat? The cat is associated with elegance and a kind of magic.
 Please take a look: http://origami.bieszczady.pl/images/kot.png

I could never in my whole life draw as well as that.
But they are *skittles*, just like Lamb Da.
Cute.  Stiff. Lifeless.  Easy to knock over.
Reminds me of a salt shaker and pepper pot of my mother's.

The collar's good, but
the lambda is just pasted on for no intrinsic reason.

How about a relatively smart animal _in motion_ with the
lambda formed naturally by the positions of its limbs?
Maybe a leaping gelada with its hind legs forming the
\ of the lambda and its tail and back forming the / ?
Smart, social, moving, and of course, furry.




Re: [Haskell-cafe] A Mascot

2011-11-21 Thread Richard O'Keefe

On 21/11/2011, at 9:22 PM, Karol Samborski wrote:

 Hi all,
 
 This is my sister's proposition:
 http://origami.bieszczady.pl/images/The_Lamb_Da.png
 
 What do you think?

It looks like a skittle with a baby bonnet.
C'est mignon, mais ce n'est pas la guerre (it's cute, but it is not war),
as Pierre Bosquet almost said.




Re: [Haskell-cafe] Is it possible to represent such polymorphism?

2011-10-02 Thread Richard O'Keefe

On 3/10/2011, at 7:15 AM, Du Xi wrote:
 
 I guess this is what I want, thank you all. Although I still wonder why 
 something so simple in C++ is actually more verbose and requires less known 
 features in Haskell...What was the design intent to disallow simple 
 overloading?

It's not SIMPLE overloading you are asking for,
but AD HOC overloading, which may look simple, but really isn't.

Taking your C++ f() example, in what sense are the two functions _the same 
function_?





Re: [Haskell-cafe] Parameters and patterns

2011-10-02 Thread Richard O'Keefe

On 2/10/2011, at 3:27 AM, José Romildo Malaquias wrote:

 Hello.
 
 When studing programming languages I have learned that parameter is a
 variable (name) that appears in a function definition and denotes the
 value to which the function is applied when the function is called.

Who told you that?  Variables are one thing and names are another.

I think you are talking about what the definition of Algol 60
called a formal parameter.

It's worth noting that formal parameters in Algol 60 could be
labels, switches, and procedures, while there were no label variables,
switch variables, or procedure variables.   Names and variables are
*really* different things.

For what it's worth, the Haskell 2010 report appears to use parameter
informally to mean
 - a formal parameter of a function
 - a formal parameter position of a type constructor
 - a constant characterising a numeric type
but I don't see a precise definition anywhere.
 
 Argument is the value to which the function is applied.

I think you are talking about what the definition of Algol 60
called an actual parameter.
The word argument is very often used for formal parameters too.

For what it's worth, the Haskell 2010 report appears to use argument
informally to mean
 - an actual parameter of a function
 - an actual parameter of a type constructor
but I don't see a precise definition anywhere.

 Now I am not sure how to apply these concepts to Haskell, as Haskell
 uses pattern matching to deal with argument passing to functions.

Realise that what you thought you knew was a half truth:  all formal
parameters are patterns, and all (new) identifiers are patterns, but
not all patterns are identifiers.  (And of course that is a half truth
too.)
 
 For instance, in the definition
 
  f x = 2 * x + 1
 
 x is a parameter, and in the application
a parameter, or an argument, or a formal parameter, or a formal
argument, or what you please.
 
  f 34
 
 34 is an argument
an argument, or a parameter, or an actual parameter, or an
actual argument, or what you please.
 
 But in the definition
 
  g (_:xs) = xs
 
 what is the parameter of the function g? Is it the pattern (_:xs)? If so
 then a parameter is not necessarily a variable anymore, and that seems
 very strange.

Why?  Patterns are a generalisation of variables.
Practically all functional languages since Lisp use pattern matching,
and even Lisp these days has destructuring-bind.

 And what is xs? Is it a parameter, although it does not
 denote the value to which the function is aplied, but just part of it?

This is the point where some people would say this is just semantics.
The problem is that it is precisely NOT semantics.  You clearly understand
the *semantics* here; what's bothering you is the lexical level, what to
call something.  The first occurrence of xs is a binding occurrence of an
identifier inside a formal parameter and the second occurrence is an
applied occurrence of an identifier.

The Haskell 2010 report often uses parameter to refer to a
formal parameter _place_ of a function rather than to the text
that fills that place in a function definition.  On that reading,
(_:xs) is *not* a parameter, it's a pattern that appears in the
first parameter *position* of g, and parameters as such do not have names.

 I am writing some slides to use in my functional programming classes,
 but I am not sure how to deal with these terms.

Consistently with the text-book.




Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Richard O'Keefe

On 23/09/2011, at 4:06 PM, Chris Smith wrote:

 On Fri, 2011-09-23 at 11:02 +1200, Richard O'Keefe wrote:
 I do think that '..' syntax for Float and Double could be useful,
 but the actual definition is such that, well, words fail me.
 [1.0..3.5] = [1.0,2.0,3.0,4.0]   Why did anyone ever think
 _that_ was a good idea?
 
 In case you meant that as a question, the reason is this:
 
   Prelude> [0.1, 0.2 .. 0.3]
   [0.1,0.2,0.30000000000000004]

That shows why it is a *BAD* idea.  
0.3 comes out as 0.29999999999999998890
so the final value is clearly and unambiguously
*outside* the requested range.

 Because of rounding error, an implementation that meets your proposed
 law would have left out 0.3 from that sequence, when of course it was
 intended to be there.

But the output shown does NOT include 0.3 in the sequence.

0.3 `elem` [0.1, 0.2 .. 0.3]

is False.

  This is messy for the properties you want to
 state, but it's almost surely the right thing to do in practice.

I flatly deny that.  I have access to several programming languages
that offer 'REAL DO', including Fortran, R, and Smalltalk.  They all
do the same thing; NONE of them overshoots the mark.

If I *wanted* the range to be enlarged a little bit,
I would enlarge it myself:  [0.1, 0.2 .. 0.3+0.001] perhaps.
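A short runnable sketch of the behaviour being criticised, under GHC's Prelude semantics for Double ranges (the enlarged-bound workaround is the one suggested above):

```haskell
-- The last element of the range overshoots the stated upper bound,
-- so 0.3 itself is never an element of the list; enlarging the bound
-- by hand makes the programmer's intent explicit instead.
main :: IO ()
main = do
  print [0.1, 0.2 .. 0.3 :: Double]                -- last element exceeds 0.3
  print (0.3 `elem` [0.1, 0.2 .. 0.3 :: Double])   -- False
  print [0.1, 0.2 .. 0.3 + 0.001 :: Double]        -- explicit enlargement
```

The overshoot comes from the Report's rule for numeric enumerations: elements are taken while they are at most the limit plus half the increment, so 0.2 + 0.1 (which is 0.30000000000000004 in Double) is kept.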

  If the
 list is longer, then the most likely way to get it right is to follow
 the behavior as currently specified.

I don't see the length of the list as having much relevance; if the
bug shows up in a list of length 3, it is clearly not likely to be
any better for longer lists.  This is NOT, by any stretch of the
imagination, a feature: it is a BUG.  If you have used REAL DO in almost any other
programming language, you will be shocked and dismayed by its behaviour
in Haskell.

Programming constructs that are implemented to do what you would
probably have meant if you were an idiot, instead of what you *asked*
for, are dangerous.

 If you can clear this up with a better explanation of the properties,
 great!  But if you can't, then we ought to reject the kind of thinking
 that would remove useful behavior when it doesn't fit some theoretical
 properties that looked nice until you consider the edge cases.

I don't see any useful behaviour here.
I see an implausibly motivated bug and while I _have_ written REAL DO
in the past (because some languages offer only one numeric type), I
cannot imagine wishing to do so in Haskell, thanks to this bug.  What
I want now is a compiler option, on by default, to assure me that I am
*not* using floating point enumeration in Haskell.





Re: [Haskell-cafe] instance Enum Double considered not entirely great?

2011-09-26 Thread Richard O'Keefe

On 26/09/2011, at 11:08 AM, Daniel Fischer wrote:
 
 And: distinguish NaNs or identify them all?
 I lean towards identifying them all, I've never cared for whether they come 
 from 0/0, Infinity - Infinity or what, but I could be convinced.

There are very many bit patterns that count as NaNs, but only two
classes worth distinguishing: qNaNs and sNaNs.  The payload bits
have no cross-platform significance; if it comes to that, there are
two opposing conventions for which value of the is-it-quiet bit, so
it's not clear if any distinction is worth making.
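A minimal Haskell sketch (the names nan1 and nan2 are mine) of why NaNs need special handling regardless of origin: any NaN compares unequal to everything, including itself, and both 0/0 and Infinity - Infinity are flagged by isNaN:

```haskell
-- Two NaNs from different origins; their payload bits may differ,
-- but standard Haskell cannot observe that difference: (==) reports
-- False even for a NaN compared with itself, and isNaN accepts both.
nan1, nan2 :: Double
nan1 = 0 / 0            -- NaN from 0/0
nan2 = 1 / 0 - 1 / 0    -- NaN from Infinity - Infinity

main :: IO ()
main = do
  print (nan1 == nan1)              -- False: a NaN is unequal to itself
  print (isNaN nan1 && isNaN nan2)  -- True: both are NaNs
```

This is one practical argument for identifying them all: without bit-level inspection, the payloads (and the quiet/signalling bit) are invisible from Haskell anyway.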




