I thought I'd reply to recent LL1 traffic with one message instead of
several.

Paul Prescod writes:
> If succinctness = power, COBOL would not just be a dying programming
> language but a failed *idea*. The very idea of trying to make a
> programming language "like English" would be ridiculous. Clearly at
> least some language designers disagree that succinctness is an
> overriding goal. COBOL was supposed to bring programs to the
> programmer's mental model rather than vice versa. (I don't think it
> succeeded, but I'm glad someone made the attempt)

I think COBOL was supposed to make programs comprehensible without
special training on the part of the reader, and I believe it
succeeded.  I have heard from consultants who worked with COBOL and
non-COBOL shops that non-programmers in the COBOL shops regularly read
and understood source code, an event that was rare in non-COBOL shops.

The principle Paul Prescod puts forth is a related one: any Python
programmer should be able to read and understand any Python program,
even in a problem domain they don't know anything about:

> When I write code I always have in the back of my mind whether an
> occasional Python programmer could follow it. I will use an advanced
> feature if it dramatically simplifies the code. Otherwise, I will
> avoid it.

On infix versus prefix:

> That presumes that infix notation is merely a whimsical preference that
> could change soon whereas I think that it probably evolved over millennia
> as the best solution to keeping operators close to their operands.

That's what I thought, too, but then I examined some expressions.

In ((2 - 3) * (4 + 1)) / (11 + 3), the / is separated from its child
operators (* and the second +) by six and two tokens respectively, for
a sum of eight.  * is separated from its child operators by two and
two tokens, for a total of four.  The other three operators are
adjacent to their operands, so we have a total of only twelve
intervening tokens.

In the prefix notation / * - 2 3 + 4 1 + 11 3, / is separated from its
children by zero and seven tokens; * from its children by zero and
three; and each of the +s and the - from its operands by zero and one,
for a total of thirteen intervening tokens, slightly more than in
parenthesized infix notation.  If the order of /'s children were
swapped, prefix would keep the operators closer to their operands than
infix does.

I think Dijkstra's comment that infix notation allows you to not
express order of operations when it doesn't matter is closer to the
mark, although with Lisp's variadic operators, that advantage seems
moot.
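The counts above can be checked mechanically.  Here's a small sketch;
the parent/child index pairs are read off the two expressions by hand:

```python
def total_distance(pairs):
    """Sum the tokens strictly between each (parent, child) index pair."""
    return sum(abs(i - j) - 1 for i, j in pairs)

infix = "( ( 2 - 3 ) * ( 4 + 1 ) ) / ( 11 + 3 )".split()   # / at index 13
infix_pairs = [(13, 6), (13, 16),   # / to its child operators * and +
               (6, 3), (6, 9),      # * to its child operators - and +
               (3, 2), (3, 4),      # - to its operands 2 and 3
               (9, 8), (9, 10),     # first + to its operands 4 and 1
               (16, 15), (16, 17)]  # second + to its operands 11 and 3

prefix = "/ * - 2 3 + 4 1 + 11 3".split()                   # / at index 0
prefix_pairs = [(0, 1), (0, 8),     # / to * and to the second +
                (1, 2), (1, 5),     # * to - and to the first +
                (2, 3), (2, 4),     # - to its operands
                (5, 6), (5, 7),     # first + to its operands
                (8, 9), (8, 10)]    # second + to its operands

print(total_distance(infix_pairs), total_distance(prefix_pairs))  # 12 13
```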


On binary operators versus functions, Paul Prescod wrote, with regard
to reduce:
> There are only a few binary infix operators (compared to the number of
> things in the universe that are already functions). So you can't get by
> without the for-loop (or recursion or similar alternatives).

Almost any function whose domain is sets or lists of some type can be
expressed as a reduction with a binary function.  For example, you can
express a curried 'map' in terms of this binary function:

class mapper:
    def __init__(self, func):
        self.func = func
    def __call__(self, acc, item):  # reduce calls this as f(accumulator, item)
        return acc + [self.func(item)]

squares = reduce(mapper(lambda x: x * x), range(10), [])

(Python's 'reduce' works from left to right, like Haskell's foldl.)

Here's a Common Lisp version of the same that works from right to left:
* (defun mapper (fn) (lambda (item list) (cons (funcall fn item) list)))
MAPPER
* (reduce (mapper (lambda (x) (* x x))) '(1 2 3 4 5 6 7 8 9 10) :from-end t
        :initial-value '())
(1 4 9 16 25 36 49 64 81 100)

Likewise, you can express 'filter':

class filterer:
    def __init__(self, func):
        self.func = func
    def __call__(self, acc, item):
        if self.func(item): return acc + [item]
        else: return acc

threes = reduce(filterer(lambda x: x % 3 == 0), range(20), [])
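In later Pythons the same two reductions come out a bit lighter:
'reduce' moved to functools, and closures can stand in for the
callable classes:

```python
from functools import reduce

def mapper(func):
    # reduce calls the binary function as f(accumulator, item)
    return lambda acc, item: acc + [func(item)]

def filterer(pred):
    # the conditional expression keeps or drops each item
    return lambda acc, item: acc + [item] if pred(item) else acc

squares = reduce(mapper(lambda x: x * x), range(10), [])
threes = reduce(filterer(lambda x: x % 3 == 0), range(20), [])
```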


On syntactic abstraction in Python:

Paul Graham writes, quoting Paul Prescod:
> > You can do this, if you really need to, but you do it at runtime, not
> > compile time. As I said before, it simplifies the programmer's mental
> > model if they have to think only about runtime OR compile time, but not
> > both.
> 
> When you say you do it at runtime, do you mean you in effect
> call eval?  Because you can do that in Lisp too; it's just
> considered a hack.

Paul Prescod had said (quoting Peter Norvig) that you have eval and
operator overloading, so you don't miss macros all that much.  It's
true that Python's operator overloading lets you extend the language
in surprising ways; for example, one afternoon, I wrote a simple
symbolic math package that let you write algebraic expressions in
Python.  Here's an interactive Python session using it:

>>> import sym
>>> x, y = sym.vars.x, sym.vars.y
>>> myexpr = 3 * x * y + 2 * x - y + 13
>>> print myexpr
(((((3 * x) * y) + (2 * x)) - y) + 13)
>>> myexpr.where(x=3, y=3)
43
>>> myexpr.where(x=0, y=0)
13
>>> print myexpr.derivative(x).simplified()
((y * 3) + 2)


This was about 120 lines of code:
http://lists.canonical.org/pipermail/kragen-hacks/2000-December/000278.html
I posted it to comp.lang.python, and I think I later hacked it to be
able to differentiate a bunch more things before I lost interest, but
I've lost that version.
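The mechanism is ordinary operator overloading: arithmetic on
variables builds an expression tree instead of a number.  Here's a
minimal sketch of that mechanism, with illustrative names rather than
the actual interface of the package above:

```python
class Expr:
    """An expression-tree node; + and * build trees instead of computing."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def __add__(self, other): return Expr('+', self, wrap(other))
    def __radd__(self, other): return Expr('+', wrap(other), self)
    def __mul__(self, other): return Expr('*', self, wrap(other))
    def __rmul__(self, other): return Expr('*', wrap(other), self)
    def __str__(self):
        return '(%s %s %s)' % (self.left, self.op, self.right)
    def where(self, **env):
        left, right = self.left.where(**env), self.right.where(**env)
        return left + right if self.op == '+' else left * right

class Var(Expr):
    def __init__(self, name): self.name = name
    def __str__(self): return self.name
    def where(self, **env): return env[self.name]

class Const(Expr):
    def __init__(self, value): self.value = value
    def __str__(self): return str(self.value)
    def where(self, **env): return self.value

def wrap(obj):
    return obj if isinstance(obj, Expr) else Const(obj)

x, y = Var('x'), Var('y')
expr = 3 * x * y + 2 * x   # __rmul__ handles int * Expr
```

Since int doesn't know how to multiply an Expr, Python falls back to
Expr.__rmul__, which is what makes 3 * x work.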

Since you can also "overload" attribute access, you can also do things
like this:

>>> import xml
>>> x = xml.xml
>>> x.p("This is some bold text: " + x.b("some text") + " and "
...     + x.a("this is a hotlink", href="http://www.example.com")
...     + " and this is an image: "
...     + x.img(src="http://www.example.com/dum.png"))
'<p>This is some bold text: <b>some text</b> and <a href="http://www.example.com">this is a hotlink</a> and this is an image: <img src="http://www.example.com/dum.png" /></p>'

That code is here:
http://lists.canonical.org/pipermail/kragen-hacks/2000-September/000266.html
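The trick is that __getattr__ manufactures a tag-building function on
demand.  Here's a minimal sketch of that idea, not the actual code at
that URL:

```python
class TagBuilder:
    def __getattr__(self, name):
        # Any attribute lookup, e.g. x.p or x.img, yields a tag builder.
        def tag(*children, **attrs):
            attr_text = ''.join(' %s="%s"' % kv for kv in sorted(attrs.items()))
            if children:
                return '<%s%s>%s</%s>' % (name, attr_text, ''.join(children), name)
            return '<%s%s />' % (name, attr_text)   # no children: empty element
        return tag

x = TagBuilder()
markup = x.p("some " + x.b("bold") + " text")
```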

So you can go some distance in the direction of syntactic abstraction
in Python with operator overloading.  But I often wish for more, for a
variety of reasons explained in _On Lisp_.  Most often, I wish for
unwind-protect macros (Common Lisp's with-open-file, for example, and
sometimes I wish for an analogous with-lock-held) and new binding
constructs, although I would like these to be merely a more convenient
interface to existing functions.
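Python did eventually grow a construct for exactly this: the 'with'
statement and context managers.  A with-lock-held analogue can be
sketched with contextlib (threading.Lock already supports 'with'
directly; this just shows the shape):

```python
from contextlib import contextmanager
import threading

lock = threading.Lock()

@contextmanager
def with_lock_held(lk):
    lk.acquire()
    try:
        yield          # the body of the with-block runs here
    finally:
        lk.release()   # released even if the body raises

with with_lock_held(lock):
    pass  # critical section
```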

Perhaps I am too polluted with C++ --- the operator overloading tricks
above in Python also work in C++, and unwind-protect "macros" in C++
are simply auto objects with destructors.

Paul Graham writes:
> For example, a lot of Viaweb's competitors used languages like C++,
> presumably partly because they were "mainstream" languages and
> anyone who came along later to maintain them would have an easy time
> of it.  But this same technology made it slow to *develop* programs.
> Result: they are out of business, and no one is reading all their
> nice new-hire friendly mainstream code.

Didn't Viaweb have a substantial body of code in C or C++, even in the
early days?  I seem to recall that you were the only one of the three
founders whose first choice was Lisp.

-- 
<[EMAIL PROTECTED]>       Kragen Sitaker     <http://www.pobox.com/~kragen/>
Silence may not be golden, but at least it's quiet.  Don't speak unless you
can improve the silence.  I have often regretted my speech, never my silence.
-- ancient philosopher Syrus (?) via Adam Rifkin, <[EMAIL PROTECTED]>

