Re: On: raising the semantic level of a program

2020-06-29 Thread Chuck Guzis via cctalk
On 6/29/20 12:41 PM, Rich Alderson via cctalk wrote:
> From: Chuck Guzis 
> Sent: Sunday, June 28, 2020 4:51 PM
> 
>> It's noteworthy that on the Univac 1100 series, a "byte" could be 6, 9
>> or 12 bits, but not 8. (36 bit words).  The PDP-10 had similar issues,
>> such as the "packed" string format of 5 7-bit characters per word, with
>> one bit unused.
> 
> Of course, on the PDP-10, bytes can be anywhere from 1 to 36 bits long;
> the size is defined in the pointer, not the hardware.
> 
> And in the 7-bit ASCII text format, bit 35 (the word is big-endian) *is*
> used by the default editor:  In order to allow line numbering in source
> files for languages which do not allow it, the line numbers are ASCII
> strings with bit 35 set, and the monitor (=kernel=operating system) strips
> them out before handing them to compilers' input streams.

My point being that implicit "byte" addressability isn't essential if
there are no instructions that can take advantage of it.  The PDP-10
does have halfword instructions, and although LDB, DPB, etc. are called
"byte" instructions, they are what I'd call "bit field"
instructions--nothing that would implicitly be called a "byte".
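
A minimal C sketch of the idea behind LDB/DPB -- the field's position and
width travel in the pointer, not in the opcode. The struct and helper names
are invented for illustration; this is not the real 36-bit byte-pointer
format:

    /* The pointer, not the hardware, says where the field lives and how
       wide it is.  Illustrative C only; not the real PDP-10 byte pointer. */
    #include <stdint.h>

    struct byte_ptr {
        uint64_t *word;  /* word containing the field                 */
        unsigned  pos;   /* bit offset of the field from the low end  */
        unsigned  size;  /* field width in bits, 1..36                */
    };

    static uint64_t ldb(struct byte_ptr p)            /* LoaD Byte    */
    {
        uint64_t mask = (1ULL << p.size) - 1;         /* size < 64    */
        return (*p.word >> p.pos) & mask;
    }

    static void dpb(uint64_t v, struct byte_ptr p)    /* DePosit Byte */
    {
        uint64_t mask = (1ULL << p.size) - 1;
        *p.word = (*p.word & ~(mask << p.pos)) | ((v & mask) << p.pos);
    }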

Whereas on, say, S/360, an 8-bit byte is intrinsic to the hardware and
so defines addressing granularity.

Character sets on other systems can be wildly variable; consider WPS-8,
for example.

You do what works--and the lack of byte granularity is no particular
hindrance.


--Chuck





RE: On: raising the semantic level of a program

2020-06-29 Thread Rich Alderson via cctalk
From: Chuck Guzis 
Sent: Sunday, June 28, 2020 4:51 PM

> It's noteworthy that on the Univac 1100 series, a "byte" could be 6, 9
> or 12 bits, but not 8. (36 bit words).  The PDP-10 had similar issues,
> such as the "packed" string format of 5 7-bit characters per word, with
> one bit unused.

Of course, on the PDP-10, bytes can be anywhere from 1 to 36 bits long;
the size is defined in the pointer, not the hardware.

And in the 7-bit ASCII text format, bit 35 (the word is big-endian) *is*
used by the default editor:  In order to allow line numbering in source
files for languages which do not allow it, the line numbers are ASCII
strings with bit 35 set, and the monitor (=kernel=operating system) strips
them out before handing them to compilers' input streams.
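
A sketch of that layout in C: five 7-bit characters fill bits 0-34 (the
high 35 bits, in DEC's left-to-right bit numbering), leaving bit 35 -- the
low-order bit -- for the line-number flag. Helper names are hypothetical:

    /* Five 7-bit ASCII characters packed into one 36-bit word, with the
       low bit (DEC bit 35) free as the line-number flag.  Hypothetical
       helpers, modelled on the format described above. */
    #include <stdint.h>

    static uint64_t pack5(const char c[5], int line_flag)
    {
        uint64_t w = 0;
        for (int i = 0; i < 5; i++)
            w = (w << 7) | (c[i] & 0x7F);     /* leftmost char first */
        return (w << 1) | (line_flag & 1);    /* bit 35 = flag       */
    }

    static void unpack5(uint64_t w, char c[5], int *line_flag)
    {
        *line_flag = (int)(w & 1);
        w >>= 1;                              /* drop the flag bit   */
        for (int i = 4; i >= 0; i--) {
            c[i] = (char)(w & 0x7F);          /* rightmost char first */
            w >>= 7;
        }
    }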

Rich


Rich Alderson
ex-Sr. Systems Engineer/Curator emeritus
Living Computers: Museum + Labs
2245 1st Ave S
Seattle, WA 98134

Cell: (206) 465-2916
Desk: (206) 342-2239

http://www.LivingComputers.org/

NB: This e-mail address will cease working after 1 July 2020. Use 
l...@alderson.users.panix.com for future communications.




Re: On: raising the semantic level of a program

2020-06-28 Thread dwight via cctalk
For the various flow control words, Forth generally teaches you not to spread
flow control across multiple pages. Unless you are really trying to reach for
every last clock cycle, it makes sense to factor even assembly code. It makes
understanding the code easier and allows one to think about what is happening
in the code.
Of course, good choices of names make sense. One can even make a page-long
macro to paste a block of code in-line, factoring at zero cycle cost.
Factoring is the key to such things. A page of straight assembly is a bad idea
in the first place.
I don't see creating labels spread across several pages as any better than
good factoring.
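
In C terms, the zero-cost factoring described here might look like a static
inline helper: the step gets a readable name, but the compiler pastes it
in-line. A sketch with made-up names:

    /* Factoring without a call: the helper reads as a named step, but the
       compiler pastes it in-line, so the factoring costs zero cycles.
       (Function and constant names are invented for illustration.) */
    #include <stdint.h>

    static inline uint8_t swap_nibbles(uint8_t b)   /* the factored step */
    {
        return (uint8_t)((b << 4) | (b >> 4));
    }

    uint8_t encode_byte(uint8_t b)
    {
        /* reads like a sentence, compiles like straight-line code */
        return swap_nibbles(b) ^ 0x5A;
    }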
Dwight


From: cctalk  on behalf of ben via cctalk 

Sent: Sunday, June 28, 2020 5:32 PM
To: cctalk@classiccmp.org 
Subject: Re: On: raising the semantic level of a program







Re: On: raising the semantic level of a program

2020-06-28 Thread ben via cctalk

On 6/28/2020 5:20 PM, Peter Corlett via cctalk wrote:
> On Sun, Jun 28, 2020 at 01:32:02PM -0700, Chuck Guzis via cctalk wrote:
> [...]
>> Why is byte-granularity in addressing a necessity?
>
> Because C's strings are broken by design and require one to be able to form a
> pointer to individual characters.

Just what is a NON broken string design?
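
One traditional answer is the counted string of Forth and Pascal, where the
length travels with the data instead of a NUL scan finding it. A hedged C
sketch of the idea, with invented names:

    /* A counted string: the length is known up front, so there is no NUL
       scan and no need to address individual bytes just to find the end.
       Names are hypothetical, not from any particular implementation. */
    #include <stddef.h>
    #include <string.h>

    struct counted_str {
        size_t      len;    /* number of characters, carried with the data */
        const char *data;   /* need not be NUL-terminated                  */
    };

    static struct counted_str cstr(const char *s)
    {
        struct counted_str r = { strlen(s), s };
        return r;
    }

    static int cstr_eq(struct counted_str a, struct counted_str b)
    {
        return a.len == b.len && memcmp(a.data, b.data, a.len) == 0;
    }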


>> It's only an issue if you have instructions that operate directly on byte
>> quantities in memory.
>
> One wheeze is to just declare that bytes are the same size as a machine word. C
> doesn't require char to be exactly 8 bits, but merely at least 8 bits. However,
> a lot of C code will break if char, short, int and long aren't exactly the same
> size as they are on x86. Mind you, a lot of it is still broken even if they
> are...

Disk I/O seems to have strange packed formats of bytes, shorts, and longs,
all packed into a character array. Directory structures or inodes come to mind.

The x86 is broken, period.
'int' size depends on the compiler, making porting programs interesting,
as well as macro processors and make files.

Ben.






Re: On: raising the semantic level of a program

2020-06-28 Thread ben via cctalk

On 6/28/2020 5:18 PM, Jon Elson wrote:
> On 06/28/2020 05:28 PM, ben via cctalk wrote:
>> Since punchcards I think had a 16 bit encoding, lack of byte data
>> was not big problem. Who used paper tape on a 360?
>
> IBM punch cards had 12 rows of holes.  For alpha encoding, logic in the
> controller converted that to EBCDIC or your machine's favorite internal
> character interpretation.
>
> On the IBM 360, there was a straight binary encoding using only 8 bits
> for the data (80 bytes/card), or using all 12 bits of two character
> positions to encode 3 bytes.  That way, you got 120 bytes/card.
>
> I don't know any way to get 16-bit encoding on punch cards of that
> format.  Maybe some other manufacturer's punch card format.

Bad choice of words. I looked at an IBM 1130 only for a short time, and
all I mostly remember is that you needed to convert data for every I/O device.

> We had paper tape read and punch on a 360/50 at the University of Missouri
> at Rolla.  It was used for compatibility with the Data General
> minicomputers there.  Only place I've ever seen paper tape on a 360.
>
> Jon

Ben.



Re: On: raising the semantic level of a program

2020-06-28 Thread Chuck Guzis via cctalk
On 6/28/20 4:20 PM, Peter Corlett via cctalk wrote:
> On Sun, Jun 28, 2020 at 01:32:02PM -0700, Chuck Guzis via cctalk wrote:
> [...]
>> Why is byte-granularity in addressing a necessity?
> 
> Because C's strings are broken by design and require one to be able to form a
> pointer to individual characters.

That's not a barrier, as IIRC, the C standard doesn't call out a
specific format for pointers, other than that of NULL.

I know that there was a C for the CDC Cyber series, but I don't recall
the implementation details.
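One common technique on word-addressed machines -- offered as a sketch, not
as a claim about what the Cyber compiler actually did -- is a "fat" char
pointer: a word address plus a character offset within the word:

    /* Sketch of a char* for a word-addressed machine: a word address plus
       a character index within the word.  Purely illustrative; not the
       actual CDC Cyber C representation. */
    #include <stdint.h>

    #define CHARS_PER_WORD 10          /* e.g. ten 6-bit chars in 60 bits */

    struct char_ptr {
        uint64_t *word;                /* word-granular address           */
        unsigned  off;                 /* which character within the word */
    };

    static unsigned char_load(struct char_ptr p)
    {
        unsigned shift = (CHARS_PER_WORD - 1 - p.off) * 6;
        return (unsigned)((*p.word >> shift) & 0x3F);
    }

    static struct char_ptr char_next(struct char_ptr p)   /* p + 1 */
    {
        if (++p.off == CHARS_PER_WORD) { p.off = 0; p.word++; }
        return p;
    }
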

It's noteworthy that on the Univac 1100 series, a "byte" could be 6, 9,
or 12 bits, but not 8 (36-bit words).  The PDP-10 had similar issues,
such as the "packed" string format of 5 7-bit characters per word, with
one bit unused.

> One wheeze is to just declare that bytes are the same size as a machine word. C
> doesn't require char to be exactly 8 bits, but merely at least 8 bits. However,
> a lot of C code will break if char, short, int and long aren't exactly the same
> size as they are on x86. Mind you, a lot of it is still broken even if they
> are...

Who cares about x86?  I suspect that there's a lot of x86 code out there
that assumes little-endian quantities as well.

If you want to run code written for the x86 platform, get an x86 system.

--Chuck




Re: On: raising the semantic level of a program

2020-06-28 Thread Chuck Guzis via cctalk
On 6/28/20 3:28 PM, ben via cctalk wrote:
> On 6/28/2020 2:32 PM, Chuck Guzis via cctalk wrote:
> 
>> Why is byte-granularity in addressing a necessity?  It's only an issue
>> if you have instructions that operate directly on byte quantities in
>> memory.
>>
> Why have bytes in the first place then? A packed string does count here.
> IBM started this mess with the 360 and 32 bits, and everybody followed.
> Is Fortran I/O the BIBLE on character data?
> IBM never seemed to follow even one character set encoding, even with
> similar machines like the IBM 1130.

There were lots of machines where the granularity was the word.  The CDC
6000/7000/Cyber 60-bit machines, for example.  The lack of byte
addressing did nothing to hold those machines back.  At one point, CDC
had the fastest COBOL implementation, even though it lacked byte
addressing and decimal arithmetic, something that you'd think would give
an edge to the S/360 series.  Chew on that one for a while.

Punched cards (at least those of the 1130 era) used 12-bit encoding if
column-binary, or 36-bit encoding if handling row-binary (e.g. the 704,
which is why early programming languages used only the first 72 columns
of a card).

FWIW, I've also worked on bit-granular addressing systems, where bit
arrays were part of the instruction set.

--Chuck


Re: On: raising the semantic level of a program

2020-06-28 Thread Peter Corlett via cctalk
On Sun, Jun 28, 2020 at 01:32:02PM -0700, Chuck Guzis via cctalk wrote:
[...]
> Why is byte-granularity in addressing a necessity?

Because C's strings are broken by design and require one to be able to form a
pointer to individual characters.

> It's only an issue if you have instructions that operate directly on byte
> quantities in memory.

One wheeze is to just declare that bytes are the same size as a machine word. C
doesn't require char to be exactly 8 bits, but merely at least 8 bits. However,
a lot of C code will break if char, short, int and long aren't exactly the same
size as they are on x86. Mind you, a lot of it is still broken even if they
are...
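
The "at least 8 bits" rule is spelled out in <limits.h> as CHAR_BIT. A small
sketch of making the usual x86-shaped size assumptions explicit rather than
silent (using C11 _Static_assert):

    /* C only promises minimum widths; code that needs the x86 sizes can
       at least say so, and fail loudly everywhere else. */
    #include <limits.h>

    _Static_assert(CHAR_BIT == 8,      "assumes 8-bit char");
    _Static_assert(sizeof(short) == 2, "assumes 16-bit short");
    _Static_assert(sizeof(int)   == 4, "assumes 32-bit int");
    /* On a 36-bit machine, CHAR_BIT might be 9 and these would all fire. */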



Re: On: raising the semantic level of a program

2020-06-28 Thread Jon Elson via cctalk

On 06/28/2020 05:28 PM, ben via cctalk wrote:
> Since punchcards I think had a 16 bit encoding, lack of byte data
> was not big problem. Who used paper tape on a 360?

IBM punch cards had 12 rows of holes.  For alpha encoding, logic in the
controller converted that to EBCDIC or your machine's favorite internal
character interpretation.

On the IBM 360, there was a straight binary encoding using only 8 bits
for the data (80 bytes/card), or using all 12 bits of two character
positions to encode 3 bytes.  That way, you got 120 bytes/card.
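
The arithmetic of that second encoding is plain bit packing: two 12-bit
columns hold exactly three 8-bit bytes, so 80 columns yield 120 bytes. A
sketch with hypothetical helper names (the real packing was done by the
hardware, and the actual column ordering may differ):

    /* Two 12-bit card columns <-> three 8-bit bytes (24 bits each way). */
    #include <stdint.h>

    static void pack_columns(const uint8_t b[3], uint16_t col[2])
    {
        uint32_t bits = ((uint32_t)b[0] << 16) | ((uint32_t)b[1] << 8) | b[2];
        col[0] = (bits >> 12) & 0xFFF;     /* first column: high 12 bits */
        col[1] = bits & 0xFFF;             /* second column: low 12 bits */
    }

    static void unpack_columns(const uint16_t col[2], uint8_t b[3])
    {
        uint32_t bits = ((uint32_t)(col[0] & 0xFFF) << 12) | (col[1] & 0xFFF);
        b[0] = (uint8_t)(bits >> 16);
        b[1] = (uint8_t)(bits >> 8);
        b[2] = (uint8_t)bits;
    }
    /* 80 columns/card / 2 columns * 3 bytes = 120 bytes/card. */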


I don't know any way to get 16-bit encoding on punch cards of that
format.  Maybe some other manufacturer's punch card format.

We had paper tape read and punch on a 360/50 at the University of Missouri
at Rolla.  It was used for compatibility with the Data General
minicomputers there.  Only place I've ever seen paper tape on a 360.


Jon


Re: On: raising the semantic level of a program

2020-06-28 Thread ben via cctalk

On 6/28/2020 2:32 PM, Chuck Guzis via cctalk wrote:
> Why is byte-granularity in addressing a necessity?  It's only an issue
> if you have instructions that operate directly on byte quantities in memory.

Why have bytes in the first place then? A packed string does count here.
IBM started this mess with the 360 and 32 bits, and everybody followed.
Is Fortran I/O the BIBLE on character data?
IBM never seemed to follow even one character set encoding, even with
similar machines like the IBM 1130.



> --Chuck

"We don't have opcode space (now that we are 32 bits) to give a byte
LOAD/STORE, but you can buy extra hardware for string functions" seems to
have been the marketing idea back then.


Since punch cards, I think, had a 16-bit encoding, the lack of byte data
was not a big problem. Who used paper tape on a 360?
Ben.




Re: On: raising the semantic level of a program

2020-06-28 Thread Chuck Guzis via cctalk
On 6/28/20 12:39 PM, ben via cctalk wrote:
> On 6/28/2020 12:06 PM, dwight via cctalk wrote:

> Since I am working on a 1970's-style computer (blinking lights,
> front panel, core memory, big rack with I/O devices) currently
> being emulated in FPGA, I have been looking at things from that era
> rather than the modern stuff. It is sure hard to find a 16- or 32-bit CPU
> from that era with simple byte accessing. That may
> be one of the reasons the PDP-11 and/or Unix developed the growth
> of more modern software and programming ideas.

Why is byte-granularity in addressing a necessity?  It's only an issue
if you have instructions that operate directly on byte quantities in memory.

--Chuck



Re: On: raising the semantic level of a program

2020-06-28 Thread ben via cctalk

On 6/28/2020 12:06 PM, dwight via cctalk wrote:

> Overloading is always a problem in Forth. It is so easy to do that one
> sometimes loses the context. I was writing an assembler for my 4004 project. I
> wanted to overload words like IF THEN for cleaner-to-read assembly so I didn't
> have a lot of branch labels. I like indenting to show beginnings and ends of
> program flow control. I don't like doing things like IFa and THENa or such to
> show the context. I still wanted to write my macros in Forth and not in
> assembly, so I made a word Macro: to replace : so that it would automatically
> keep the context straight. It is a crutch, but sometimes a well-placed crutch
> can solve such context problems.
> I now have my "CARRY IF ... THEN" in assembler and still can easily switch to
> Forth's "IF ... THEN" for making macro commands that need flow control and are
> for a completely different context than the assembled code.
> Dwight


An ELSEIF would be easy to add to Forth, I suspect.
I like the
  "IF expr statements { EIF expr statements } { ELSE } ENDIF"
  "WHILE expr statements REPEAT"
control structures, because with while { if {} else { if {} else .. }}
it tends to be hard to find just what the last } belongs to if you have
several screens of code.
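
The same complaint, phrased in C: nesting each alternative inside
else { if ... } buries the closing braces, while a flat else-if ladder keeps
one closer per decision. A trivial sketch:

    /* Nested form: after a few screens, the closing braces are a puzzle:
         if (a) { ... } else { if (b) { ... } else { if (c) { ... } } }
       Flat else-if chain: one ENDIF-like closer per decision ladder.    */
    int classify(int x)
    {
        if (x < 0) {
            return -1;
        } else if (x == 0) {
            return 0;
        } else {
            return 1;
        }
    }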


Nowadays you need more code to handle windows and other I/O than for
pure problem solving, as languages keep evolving.


Since I am working on a 1970's-style computer (blinking lights,
front panel, core memory, big rack with I/O devices) currently
being emulated in FPGA, I have been looking at things from that era
rather than the modern stuff. It is sure hard to find a 16- or 32-bit CPU
from that era with simple byte accessing. That may
be one of the reasons the PDP-11 and/or Unix developed the growth
of more modern software and programming ideas.
Ben.








Re: On: raising the semantic level of a program

2020-06-28 Thread Jecel Assumpcao Jr via cctalk
ben wrote on Sat, 27 Jun 2020 19:15:25 -0600
> It would be nice if one could define a new language for problem
> solving and run it through compiler-compiler processor for interesting 
> problems.

That is what Alan Kay's group did a few years ago in their "STEPS"
project. They wanted to implement a modern computing system in only 20
thousand lines of code, so they built a compiler-compiler called OMeta,
inspired by the classic Meta II system, and defined specialized languages
for each part of the system.

For networking, for example, they defined a language that was actually
the ASCII diagrams for packet formats from the RFCs. For graphics they
defined a sort of "APL meets streams" language called Nile, in which a
Cairo-like graphics system (called Gezira) could be described in just a
few hundred lines of code.

http://vpri.org/writings.php

Look for the texts with "STEPS" in the title - those are the yearly
reports to the NSF (2007 to 2012).

-- Jecel


Re: On: raising the semantic level of a program

2020-06-28 Thread dwight via cctalk
Overloading is always a problem in Forth. It is so easy to do that one
sometimes loses the context. I was writing an assembler for my 4004 project. I
wanted to overload words like IF THEN for cleaner-to-read assembly so I didn't
have a lot of branch labels. I like indenting to show beginnings and ends of
program flow control. I don't like doing things like IFa and THENa or such to
show the context. I still wanted to write my macros in Forth and not in
assembly, so I made a word Macro: to replace : so that it would automatically
keep the context straight. It is a crutch, but sometimes a well-placed crutch
can solve such context problems.
I now have my "CARRY IF ... THEN" in assembler and still can easily switch to
Forth's "IF ... THEN" for making macro commands that need flow control and are
for a completely different context than the assembled code.
Dwight


From: cctalk  on behalf of Peter Corlett via 
cctalk 
Sent: Sunday, June 28, 2020 3:00 AM
To: General Discussion: On-Topic and Off-Topic Posts 
Subject: Re: On: raising the semantic level of a program



Re: On: raising the semantic level of a program

2020-06-28 Thread Peter Corlett via cctalk
On Sat, Jun 27, 2020 at 07:15:25PM -0600, ben via cctalk wrote:
[...]
> At what point do variable names end being comments? There needs to be more
> work on proper documenting and writing programs and modules.

What, auto-generated "documentation" which just lists function names and type
signatures is not useful? This is news to pretty much every Java project I've
had the misfortune of interacting with.

> I am not a fan of objects and operator overloading because I never know just
> what the program is doing. apples + oranges gives me what ? count of fruits,
> liters of fruit punch, a error?

That does of course depend on the strictness of the language's type system and
whether the developer has exercised good taste and discretion when using
operator overloading in their API. I would normally expect the compiler to
reject attempts to add two incompatible types, but this is often a triumph of
hope over experience. (But avoid PHP, JavaScript, and similar junk languages
hacked together in a Coke-fuelled bender to solve the immediate problem, and
you're 90% of the way there.)
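
C lacks operator overloading, but the type-strictness half of the point is
easy to sketch: wrap each unit in its own struct, and apples + oranges
simply refuses to compile, so any mixing has to go through a named
conversion. All names below are made up:

    /* Distinct wrapper types: the compiler rejects apples + oranges, and
       deliberate mixing must go through a named, visible conversion. */
    typedef struct { int n; } Apples;
    typedef struct { int n; } Oranges;
    typedef struct { int n; } Fruit;

    static Fruit fruit_total(Apples a, Oranges o)   /* explicit intent */
    {
        Fruit f = { a.n + o.n };
        return f;
    }

    /* Apples a; Oranges o;
       a + o;               -- compile error: no '+' for struct types
       fruit_total(a, o);   -- fine: the meaning is in the name        */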

> It would be nice if one could define a new language for problem solving and
> run it through compiler-compiler processor for interesting problems.

I'm unclear on what you're trying to say here.

Source-to-source translators are of course a well-trodden path, such as early
C++ "compilers" which emitted C. A weaker variant is to abuse operator
overloading to create a minilanguage that is directly compilable without
translation. Such corner-cutting techniques are useful for prototyping new
ideas, but tend to cause more trouble than they are worth if used as-is in
production.

My day job currently involves PureScript, a Haskell-inspired language which is
translated into JavaScript. It is quite an experience.



Re: On: raising the semantic level of a program

2020-06-28 Thread Stan Sieler via cctalk
Hi Dwight,

Yes...I agree, sounds like how FORTH works.

BTW, I co-implemented a FORTH for the IBM PC, back when the first IBM PC
was released.
(Next Generation Systems FORTH ... 25% faster than the prior speed leader,
Laboratory Microsystems FORTH,
and it had a lot of nice concepts, like a true/accurate decompiler.)
My co-author was Carl Sassenrath, who then went on to write the kernel of
the AMIGA operating system,
and (later) the Rebol language.  (Carl got the inner loop down to 3
instructions vs. LM's 4 instructions :)

Re: LISP ...
Yes, particularly with the advent of BBN-LISP (later named INTERLISP, then
Interlisp) ...  it had DWIM (Do What I Mean),
and a number of really neat things.  I co-implemented the Burroughs B6500
version on an ARPA contract circa 1973,
so I got to interact with the BBN people a lot, including Danny Bobrow
(spaghetti stacks), who had recently moved
to Xerox PARC, IIRC.
One of the things in INTERLISP was an optional package that implemented
"normal" looking arithmetic expressions,
so one could do something like:  (SETQ FOO (ARITH x * y + 3))
instead of (SETQ FOO (+ (* x y) 3))
(nearly 50 year old memory...it might have been higher level, like letting
me do:  (foo = x * y + 3))

I recently found a 1978 version of our INTERLISP source code!
(both the normal interpreter, and our p-code interpreter ... not sure if
the LISP-to-pcode compiler is there)

thanks,

Stan


Re: On: raising the semantic level of a program

2020-06-27 Thread ben via cctalk

On 6/27/2020 8:49 AM, dwight via cctalk wrote:

> Hi Stan
> It sounds a little like the way most good Forth programmers deal with
> problems. Forth is all about semantics. Everything is a word. The complexity
> of the program is left to the low-level stuff. When the program is done, it
> isn't a program written in Forth, it is a program written in the application.
> At the highest levels one should not see the language it was written in; one
> should only see the application. Well-written Forth at the upper levels has
> only : and ; of the language showing through. The rest are just the words in a
> sentence-like structure telling what the application is doing.
> It is too bad that people insist on using languages that even at the highest
> level are still just a program in that particular language. All the
> boilerplate is still in the way of the program.
> Although it takes a little more time to get used to, Lisp is something like
> that as well. At least well-written Lisp is. One can see what the intent is
> at the higher levels of coding. It is just learning to read the sentences.
> The lower-level language part is how you move the bits and bytes around. The
> application should tell you what it does and why. Comments should only be
> needed at the more confusing lower levels. At the higher levels comments
> would and should be redundant. The words should tell you what is being done.
> Dwight


At what point do variable names end up being comments?
There needs to be more work on proper documenting and writing of
programs and modules. I am not a fan of objects and operator overloading
because I never know just what the program is doing.
apples + oranges gives me what? A count of fruits, liters of fruit punch,
an error?

It would be nice if one could define a new language for problem
solving and run it through a compiler-compiler for interesting
problems.

Ben.






Re: On: raising the semantic level of a program

2020-06-27 Thread dwight via cctalk
Hi Stan
 It sounds a little like the way most good Forth programmers deal with
problems. Forth is all about semantics. Everything is a word. The complexity
of the program is left to the low-level stuff. When the program is done, it
isn't a program written in Forth, it is a program written in the application.
At the highest levels one should not see the language it was written in; one
should only see the application. Well-written Forth at the upper levels has
only : and ; of the language showing through. The rest are just the words in a
sentence-like structure telling what the application is doing.
It is too bad that people insist on using languages that even at the highest
level are still just a program in that particular language. All the
boilerplate is still in the way of the program.
Although it takes a little more time to get used to, Lisp is something like
that as well. At least well-written Lisp is. One can see what the intent is at
the higher levels of coding. It is just learning to read the sentences. The
lower-level language part is how you move the bits and bytes around. The
application should tell you what it does and why. Comments should only be
needed at the more confusing lower levels. At the higher levels comments would
and should be redundant. The words should tell you what is being done.
Dwight


From: cctalk  on behalf of Stan Sieler via 
cctalk 
Sent: Thursday, June 25, 2020 3:14 PM
To: General Discussion: On-Topic and Off-Topic Posts 
Subject: On: raising the semantic level of a program

Hi,

Not hardware ... but an antique software / programming concept.

Some decades ago (circa late 1970s?), I *think* I came across a concept of
"raising the semantic level" of a program by using defines/macros and newly
written library functions.  The concept was that a given language provided
a particular level of semantics.  By judicious/clever use of things like
macros, one could "raise" the level of semantics, effectively appearing to
add new features to the language (or, in this case, the instance of the
language as used in the program).

I *thought* I got that concept from Terry Winograd's excellent "Breaking the
Complexity Barrier again" (Nov, 1974,
https://dl.acm.org/doi/10.1145/951761.951764 )
...but, no.  It's not in that paper.

Does the concept ring a bell?

Can anyone provide a pointer to where I might have seen it?

It's formed the basis of my own personal programming philosophy for nearly
50 years, and I want to know where I found it, or if I might have thought
of it myself.

thanks!

Stan


Re: On: raising the semantic level of a program

2020-06-26 Thread Stan Sieler via cctalk
A friend kindly searched and found an interesting paper from 1973,

J. B. Morris, "Programming by semantic refinement", ACM SIGPLAN Notices, 1973.
https://dl.acm.org/doi/pdf/10.1145/390014.808298

While an interesting paper, it's going the opposite direction (essentially,
going from an English language description down to a final programming
language).

But, using the L1 (highest level language), L2, ..., Ln (lowest level
language) concept, I can phrase my concept better ... so ...

Most programmers write at, say, the level of L3.
They might write something like:

   mem [foo].head = something

My "raising the semantic level" would be:

#define HEAD(x) mem[(x)].head
   ...
HEAD (foo) = something

With a fair set of macros like that (HEAD, TAIL, etc.), the program is now
effectively written in a "new" language, L2 (a higher-level language than
L3).

Being written in L2, the resulting code is more readable to everyone,
partially because they aren't continually seeing the implementation of how
".head" / "mem" work/interact.

In effect, the programmer has added a feature (linked list handling,
perhaps) to L3 ... for that particular program, seemingly extending/raising
the level of the language.
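
Fleshing that out a little (the mem array, NIL, and all names here are
invented for illustration): a handful of such macros gives the program an
L2-level list vocabulary, while the preprocessor lowers everything back to
L3 array indexing:

    /* Raising the semantic level with macros: the code below talks about
       lists (L2); the preprocessor lowers it to array indexing (L3). */
    #define NIL        (-1)
    #define HEAD(x)    (mem[(x)].head)
    #define TAIL(x)    (mem[(x)].tail)
    #define IS_NIL(x)  ((x) == NIL)

    struct cell { int head; int tail; };
    static struct cell mem[1000];

    static int list_length(int lst)      /* reads at the level of lists */
    {
        int n = 0;
        while (!IS_NIL(lst)) {
            n++;
            lst = TAIL(lst);
        }
        return n;
    }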

It's that concept that I thought I saw sometime in the early 1970s :)

thanks,

Stan