On Sun, Feb 17, 2008 at 11:57:28PM -0800, Wade Curry wrote:

I have to agree with the point Chuck makes in a separate post...
Learning a language, and learning a paradigm are very different.
I /think/ you would agree that learning the paradigm is more
difficult, given your recent comments about learning recursion via
scheme.  Learning both a new language and a new paradigm usually
means I'm going to have to make several attempts.  And I usually
measure difficulty by how many attempts I have to make, rather than
the amount of time.  I'm self-taught, inexperienced, busy, and I'm
anal about the quality of my code even when I'm only starting out.

Getting to where I could program reasonably in Haskell took me about 4-5
attempts.  It's the only language that's ever taken more than one.

Haskell is by far the most difficult language I've ever learned, by a large
factor.  However, I feel it was very rewarding to do so, even if I don't
end up using it for everything useful.

I've been looking into Common Lisp for a while now.  (I skipped
out on the SICP group mostly because I didn't want to get
side-tracked).  I've found that my experience learning Lisp is a
lot like when I was learning python and OOP at the same time.  I'd
played with Apple Basic and shell, and some programmers told me,
"learn one and the rest are easy."  I know SJS will want to blame
python ;-)  but I think the problem had to do with unlearning my
wrong assumptions, overcoming ignorance, and learning an entirely
new way to "parse" problems.

I'm learning Common Lisp now, but coming from knowing Scheme.  It didn't
take me long to get the core part of the language down, and now I'm
focusing on CLOS.  I've just finished the Keene book (Object-Oriented
Programming in Common Lisp: A Programmer's Guide to CLOS), and am now
starting "The Art of the Metaobject Protocol".  The AMOP book is largely
useful because it is an example of a large system designed using the CLOS
model, and that system is CLOS itself.

Conceptually, I see little difference between the meta-languages
that David mentions, and the APIs that have to be learned with any
other language. When a certain API stinks, we don't /usually/ blame
the language; we blame the API.  It seems inconsistent to blame the
language if a module is poorly documented or the meta-language is
poorly designed in haskell.

My comment was about the _good_ meta-languages.  When understood, they have
an elegance and conciseness to them.  The problem is that the language is so
flexible that it takes some reading and understanding just to know what the
syntax of the code even is.  It really makes me appreciate the uniformity
found in Common Lisp.  Even when people extend the syntax, they tend to do
so in ways that are familiar to someone using Lisp.  Seeing (with-foo ...)
suggests that something to do with foo will happen in the inside of the
form.
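Haskell's standard library happens to follow the same naming convention
with its with* functions, and the pattern generalizes via the real
Control.Exception.bracket.  A minimal sketch (the withResource name is
mine, just to show the shape):

```haskell
import System.IO (withFile, IOMode (ReadMode), hGetLine)
import Control.Exception (bracket)

-- Like (with-foo ...): withFile opens the handle, runs the action
-- "inside the form", and guarantees the handle is closed afterwards,
-- even if the action throws.
firstLine :: FilePath -> IO String
firstLine path = withFile path ReadMode hGetLine

-- The general shape behind every with-style function:
-- acquire a resource, release it, and run an action in between.
withResource :: IO r -> (r -> IO ()) -> (r -> IO a) -> IO a
withResource acquire release = bracket acquire release
```

Just as with (with-foo ...), a reader can tell at the call site that
something scoped to the resource will happen inside.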

Of course, thinking and speaking are rather universal, whereas most
people never have to select a programming language, so the
comparison isn't entirely fair.  And, being easy to learn /is/ a
good thing in a programming language.  Still, I hate to see a good
tool be passed over because the effort required to learn it is
significant.  Seems rather like pointy-haired reasoning to overlook
the results that could come from mastery.

I feel strongly that learning other paradigms is important to being a good
programmer.  I've worked with too many programmers that just couldn't get
past the paradigms they were familiar with.  They will struggle through an
awkward way of doing something that has a fairly elegant solution, even in
C, if they would just think of the flow as being driven by the data, or
something to that effect.

Learning Haskell liberated me from always thinking of the flow of execution
being tied to the obvious flow through the program.  Learning what aspects
of lazy evaluation in Haskell interact poorly with the outside world helps
me understand why some cases are better not done lazily, even if the
laziness would seem to help initially.
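The classic instance of that poor interaction is lazy file IO.  A small
sketch of the trap (my example, using the standard hGetContents):

```haskell
import System.IO (withFile, IOMode (ReadMode), hGetContents)

-- Trap: hGetContents returns an unevaluated thunk, and withFile closes
-- the handle on exit.  If nothing demands the string before then, the
-- read typically never happens and you get a truncated (often empty)
-- result.
readLazily :: FilePath -> IO String
readLazily path = withFile path ReadMode hGetContents

-- Fix: force the whole string (here via length) while the handle is
-- still open, so laziness can no longer interact with the close.
readStrictly :: FilePath -> IO String
readStrictly path = withFile path ReadMode $ \h -> do
  s <- hGetContents h
  length s `seq` return s
```

The laziness looks helpful (no buffer management), but the outside
world, the file handle, has state that doesn't wait for demand.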

> The problem is that Haskell's entire execution model is so
> completely different this is pretty hard to do.  Given that,
> it's not that bad at both calling to C and calling back into
> Haskell, at least once I learned the meta-language for doing so
> :-)

David, would you mind expanding on that statement, please?

First or second of the statements?  I'll guess you mean the execution
model.

Haskell's code is entirely declarative (ignoring, for this illustration,
how monads bring in the appearance of imperative constructs).  It's up to
the implementation to figure out an order in which to evaluate things.  In fact, the
implementation is not allowed to evaluate an expression earlier than
necessary unless it can "prove" that the results would be equivalent.

Practically, this is generally implemented as a directed (possibly cyclic)
graph that is both constructed and reduced as the program executes.
The faster compilers will detect operations that can be done strictly (such
as integer math) and perform them conventionally, but most of the code flow
consists of data structures that start as code/data pointers that compute a
result, then replace the pointer with the computed value.  There isn't
really any notion of a conventional call stack.
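That demand-driven reduction is easiest to see with an infinite
structure: the lists below are graphs of suspended computations, and
only the cells that take demands ever get reduced to values (a small
illustration of the model, not GHC's actual runtime representation):

```haskell
-- An infinite, self-referential list: a cyclic node in the graph.
ones :: [Integer]
ones = 1 : ones

-- Each element is a thunk (a code/data pointer) until something
-- demands its value; the thunk is then overwritten with the result.
squares :: [Integer]
squares = map (^ 2) [1 ..]

main :: IO ()
main = do
  print (take 5 ones)    -- [1,1,1,1,1]
  print (take 5 squares) -- [1,4,9,16,25]
```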

An interesting consequence is that operations can often have very different
space requirements in Haskell than a conventional strict language.  One
example is that often in Haskell, tail recursion is the _worst_ way of
expressing a term.  It is usually much better to have the recursive call
not in the tail position so that the computation can generate a new heap
cell and allow the previous one to be garbage collected.  A tail call might
require the entire call chain to reside in the heap before getting to the
end result.  This is very different from other languages.
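A concrete contrast (my example, with hypothetical names): the
guarded-recursive map below yields each cons cell lazily, so consumed
cells can be garbage collected as it goes, while the tail-recursive,
accumulator-based version produces nothing until the entire input has
been consumed, and its accumulator must live on the heap the whole time:

```haskell
-- Guarded recursion: the recursive call sits *under* the (:)
-- constructor, so each cell can be produced, consumed, and collected
-- before the next is computed.  Works even on infinite input.
mapGuarded :: (a -> b) -> [a] -> [b]
mapGuarded _ []       = []
mapGuarded f (x : xs) = f x : mapGuarded f xs

-- Tail recursion: the whole accumulated list resides on the heap
-- before any result is available, and infinite input diverges.
mapTail :: (a -> b) -> [a] -> [b]
mapTail f = go []
  where
    go acc []       = reverse acc
    go acc (x : xs) = go (f x : acc) xs

main :: IO ()
main = do
  print (take 3 (mapGuarded (+ 1) [1 ..])) -- [2,3,4], from infinite input
  print (mapTail (+ 1) [1, 2, 3])          -- [2,3,4], finite input only
```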

Calling out to C usually involves either copying data, or "freezing"
buffers (to prevent the GC from moving them), setting up an environment
where C can run, and calling into it.  Callbacks into C involve bouncing
back and forth between these.  For an interpreter, this is a lot easier to
do, since the interpreter is just normal C code.  But, the native code
generator will have to bounce back and forth.
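For concreteness, the smallest possible example of that machinery, using
GHC's standard FFI to import sin from the C math library (real bindings
that pass buffers are where the copying and freezing come in; this one
only passes a scalar):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types (CDouble)

-- "unsafe" skips the bookkeeping needed for C to call back into
-- Haskell; a "safe" import sets up that re-entrant environment.
foreign import ccall unsafe "math.h sin"
  c_sin :: CDouble -> CDouble

main :: IO ()
main = print (c_sin 0) -- 0.0
```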

> What's interesting is that he didn't even mention the usual
> problems people have with Haskell: space leaks.  It's not
> always very clear when computations will run and it can be very
> easy to build up a large data structure of potential
> computations, instead of computing the desired result.

What does the above mean?

An example of what I mentioned, how tail recursion can explode space on the
heap, whereas a non-tail call might be fine.
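The textbook case of that buildup (my example): foldl is tail recursive,
but with a lazy accumulator it constructs a chain of a million suspended
additions before evaluating any of them, while foldl' forces each step
as it goes:

```haskell
import Data.List (foldl')

-- Tail recursive, yet the accumulator is a growing thunk:
-- (((0 + 1) + 2) + 3) + ... lives on the heap until the very end,
-- and forcing it at the end can overflow the stack.
leakySum :: [Integer] -> Integer
leakySum = foldl (+) 0

-- Same shape, but each intermediate sum is forced immediately,
-- so the fold runs in constant space.
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000]) -- 500000500000
```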

Maybe he's not as stupid as you seem to think he is.

I didn't think David implied stupidity at all, only inexperience
(which I can strongly identify with).  After I get some lisp under
my belt, I may very well give Haskell a try.  It seems only fitting
since Haskell Curry was my grandpa... well, maybe not.  He does
have a great last name, though.  ;-)

I wasn't implying stupidity, just that he hadn't gotten far enough with
Haskell to run into some of the common problems.

Simon PJ's "Wearing the Hair Shirt" paper
<http://research.microsoft.com/~simonpj/papers/haskell-retrospective/index.htm>
is an excellent overview of much of Haskell, both the good and the bad.

I attended ICFP a few years back, and met Simon.  The conference was kind
of weird.  To be honest, I felt like a moron.  I couldn't even carry on a
conversation with these people.  Maybe the mathematicians in our midst
would have felt more comfortable there, but I didn't even get the jokes.

Dave

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
