Re: cctalk Digest, Vol 80, Issue 5

2021-05-07 Thread ben via cctalk

On 5/6/2021 6:43 PM, Paul Koning via cctalk wrote:
I wonder.  Consider object oriented programming, where objects that have 
all manner of stuff inside are treated as a unit and have operations 
performed on them.


I don't buy the class model of OOP. Classes are LIKE each other, not 
something that can be shared between them. If that were the case, moving 
X-windows to MS-windows would be just X->XWIN = convert(X->M$WIN) for 
screen display.
> Agreed on stack languages.  While there's nothing inherently hard 
> about them, they don't fit the way we're taught to handle formulas all 
> the way from grade one.  In fact, while APL is infix, it's 
> right-associative, which is a definite problem.  It's unfortunate 
> Iverson didn't fix the assignment operator problem the way POP-2 did, by 
> pointing it to the right so all operators could be left-associative.



I think of variables and the stack nesting of variables.

I find it confusing. Where are the variables for a=b+c found? At what 
level can you find just who defined what without reading the whole 
program? How many links must be checked to get your data?


Ben.


Re: cctalk Digest, Vol 80, Issue 5

2021-05-06 Thread Paul Koning via cctalk



> On May 6, 2021, at 5:53 PM, Adam Thornton via cctalk wrote:
> 
>> ...
> Yes, that.  C is a great assembly language preprocessor for a PDP-11.  The
> PDP-11 is a beautiful, intelligible architecture, where things happen one
> at a time in sequence.  This is easy to think about.  Unfortunately it's
> got very little to do with the way that modern high-performance silicon
> gets stuff done.

Sort of.  But while a lot of things happen in parallel, out of order, 
speculatively, etc., the programming model exposed by the hardware still is the 
C sequential model.  A whole lot of logic is needed to create that appearance, 
and in fact you can see that all the way back in the CDC 6600 "scoreboard" and 
"stunt box".  Some processors occasionally relax the software-visible order, 
which tends to cause bugs, create marketing issues, or both -- Alpha comes to 
mind as an example.

> (Aside: it's also weird that the one-thing-at-a-time sequencing is the
> thing that feels logical and intuitive to us since it is absolutely not how
> our brains work.)
> 
> I would argue that Forth and Postscript are hard to understand for a
> different reason than APL: APL is inherently vectorized, and requires, more
> or less, that you treat matrices as single entities.  Not many people's
> brains work that way.  

I wonder.  Consider object oriented programming, where objects that have all 
manner of stuff inside are treated as a unit and have operations performed on 
them.

Agreed on stack languages.  While there's nothing inherently hard about them, 
they don't fit the way we're taught to handle formulas all the way from grade 
one.  In fact, while APL is infix, it's right-associative, which is a definite 
problem.  It's unfortunate Iverson didn't fix the assignment operator problem 
the way POP-2 did, by pointing it to the right so all operators could be 
left-associative.
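To make the right-associativity point concrete, here's a small sketch (in Python, not real APL) of strictly right-to-left evaluation with all operators at equal precedence, which is what trips up readers trained on schoolbook precedence:

```python
# Sketch of APL-style evaluation: strictly right-to-left, every
# operator at the same precedence.  Not real APL, just the rule.

def eval_apl_style(tokens):
    """Evaluate a flat infix token list right-to-left.
    tokens: alternating values and operator symbols,
    e.g. [2, '*', 3, '+', 4]."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    acc = tokens[-1]            # start at the rightmost value
    i = len(tokens) - 2
    while i >= 0:               # fold leftwards, one operator at a time
        op, left = tokens[i], tokens[i - 1]
        acc = ops[op](left, acc)
        i -= 2
    return acc

print(eval_apl_style([2, '*', 3, '+', 4]))  # 14
```

In APL, 2×3+4 is 14, because 3+4 binds first; conventional precedence gives 10. Same story with 10-4-3: right-associative reading gives 9, the left-associative reading every schoolchild learns gives 3.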

If Martin Rem's associons ever take off (see my previous email) that will 
require a similar mental process as the one for APL of treating composite data 
as single entities.
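For what it's worth, the "composite data as a single entity" habit can be sketched in plain Python (a loose analogy, not APL or associon semantics):

```python
# Loose sketch of the APL habit of operating on whole arrays as
# single values, with no visible element-at-a-time loop.

def add(a, b):
    """Elementwise sum of two equal-length vectors: APL's a + b."""
    return [x + y for x, y in zip(a, b)]

def total(a):
    """Sum-reduction over a vector: APL's +/a."""
    s = 0
    for x in a:
        s += x
    return s

prices = [3, 1, 4]
qty = [2, 5, 1]
# APL would write roughly +/prices×qty -- one expression, the arrays
# treated as units; the loop exists only inside the primitives.
revenue = total([p * q for p, q in zip(prices, qty)])
print(revenue)  # 15
```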

paul




Re: cctalk Digest, Vol 80, Issue 5

2021-05-06 Thread Adam Thornton via cctalk
From: Liam Proven 
> To: "Discussion: On-Topic and Off-Topic Posts" 
> Subject: Re: Motor generator
>
> I think because for lesser minds, such as mine, [APL is] line noise.
>
> A friend of mine, a Perl guru, studied A-Plus for a while. (Morgan
> Stanley's in-house APL dialect.) He said to me that "when I came back
> to Perl, I found it irritatingly verbose..." and then was immediately
> deeply shocked at the thought.
>
> I seriously think this is why Lisp didn't go mainstream. For a certain
> type of human mind, it's wonderful and clear and expressive, but for
> most of us, it's just a step too far.
>
> Ditto Forth, ditto Postscript, etc.
>
> Plain old algebraic infix notation has thrived for half a millennium
> because it's easily assimilated and comprehended, and many arguably
> better notations just are not.
>
> The importance of being easy, as opposed to being clear, or
> unambiguous, or expressive, etc., is widely underestimated.
>
>
Yes, that.  C is a great assembly language preprocessor for a PDP-11.  The
PDP-11 is a beautiful, intelligible architecture, where things happen one
at a time in sequence.  This is easy to think about.  Unfortunately it's
got very little to do with the way that modern high-performance silicon
gets stuff done.

(Aside: it's also weird that the one-thing-at-a-time sequencing is the
thing that feels logical and intuitive to us since it is absolutely not how
our brains work.)

I would argue that Forth and Postscript are hard to understand for a
different reason than APL: APL is inherently vectorized, and requires, more
or less, that you treat matrices as single entities.  Not many people's
brains work that way.  It's hard enough to learn to treat complex numbers
as single entities.  Forth and Postscript require you to keep a really deep
stack in your brain to understand the code, and people aren't really very
good at doing that for more than three or four items (much fewer than 7 +/-
2).  Both of these are much more difficult for most people to work with and
reason about than something imperative and infix-based.
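As an illustration of the stack bookkeeping involved, here's a tiny RPN evaluator (a sketch in Python, not real Forth or Postscript); even for a short program the reader has to replay the stack state at every step:

```python
# Sketch of a Forth/Postscript-style stack machine, to show the
# stack picture a reader must carry in their head while reading code.

def rpn(tokens):
    """Evaluate a postfix token list; returns the final stack."""
    ops = {'+': lambda a, b: a + b,
           '-': lambda a, b: a - b,
           '*': lambda a, b: a * b}
    stack = []
    for t in tokens:
        if t == 'dup':            # Forth DUP: copy the top item
            stack.append(stack[-1])
        elif t == 'swap':         # Forth SWAP: exchange the top two
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif t in ops:            # binary operator: pop two, push one
            b, a = stack.pop(), stack.pop()
            stack.append(ops[t](a, b))
        else:                     # anything else is a literal: push it
            stack.append(t)
    return stack

# (5+3) * (5-3): four intermediate stack states before the final *.
print(rpn([5, 3, '+', 5, 3, '-', '*']))  # [16]
```

Following `5 3 + 5 3 - *` means mentally tracking [5], [5 3], [8], [8 5], [8 5 3], [8 2], [16] -- exactly the kind of working-memory load the 7 ± 2 figure is about.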

The fundamental problem is the impedance mismatch between the way most
people think (which would at the very least take a radical reframing of
curricula to change, and might not work anyway: look at the failure of the
New Math, which was indeed very elegant, taught mathematics from first
principles as set theory, and was not at all geared to the way young
children _actually learn things_) and where we can continue to squeeze
performance out of silicon.  This is really not tractable.  I think our
best hope is to make the silicon really good at generating and figuring out
graphs so it can dispatch lots of pieces of what feels like a sequential
problem in parallel and come out with the same answer as you would have
gotten doing it the naive one-step-at-a-time way.  But we've already done
that, and, yeah, it mostly works, but the abstraction is leaky and then you
get Meltdown and Spectre.

I don't have any answers other than "move to Montana, drop off the grid,
and raise dental floss."

Adam