Re: strangest systems I've sent email from

2016-05-26 Thread Fred Cisin

On Fri, 27 May 2016, Liam Proven wrote:

So I was broadly right that the 8088/8086 sit somewhere on the
dividing line? That at least is good to know!


Of course.
That is exactly the point.
How you draw the line determines which side it will fall on, and it is right 
in the middle, so many otherwise reasonable definitions will differ on 
that group of chips, since it IS in between an 8 bit and a 16 bit.
Either an 8 bit that has been expanded to SOME 16 bit capabilities, or a 
16 bit that has been cut down into an 8 bit.




Re: strangest systems I've sent email from

2016-05-26 Thread Liam Proven
On 24 May 2016 at 23:10, Fred Cisin  wrote:
> Whether 8088 was an "8 bit" or "16 bit" processor depends heavily on how you
> define those.
> Or, you could phrase it, that the 8 bit processors at the time handled 64KiB
> of RAM.


OK, thank you all for the responses.

Rarely have I felt so lectured and indeed talked-down-to in CCmp. :-D

No, it's a fair cop, I egregiously over-simplified my comment.

So let me try to address (haha) that.

Most 8-bit CPUs that I knew of had a 16-bit address bus, and thus were
limited to 64kB of physical memory (excluding bank switching).

Most 16-bit CPUs I knew of (ignoring issues of internal ALU width
etc.) had 24-bit address buses and could thus handle 16MB of physical
memory. This includes cut-down internally-32-bit-wide devices such as
the 80386SX and 68000.

The 8088/8086 had a 20-bit address bus, the two differing mainly in the
width of the *data* bus, and the later 80286 had a 24-bit address bus.

So, yes, generally, 8-bitters could handle 64kB but 16-bitters 16MB.
As far as memory *size* considerations go, the width of the data bus,
multiplexing or multicycle accesses etc. are not germane to the
quantity of addressable memory.
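
For concreteness, a throwaway C sketch of the arithmetic behind those
figures (16, 20 and 24 address lines):

    /* Addressable bytes per address-bus width. */
    #include <stdio.h>

    int main(void)
    {
        int widths[] = { 16, 20, 24 };  /* 8-bitters, 8086, 68000-class */
        for (int i = 0; i < 3; i++)
            printf("%2d address lines -> %8lu bytes\n",
                   widths[i], 1UL << widths[i]);
        return 0;
    }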

So I was broadly right that the 8088/8086 sit somewhere on the
dividing line? That at least is good to know!

-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: strangest systems I've sent email from

2016-05-24 Thread ben

On 5/24/2016 3:32 PM, Swift Griggs wrote:

On Tue, 24 May 2016, Fred Cisin wrote:

(OB_Picky:  Due to the overlap of segment and offset, on machines that had 21
address bits, real mode actually had a maximum of 1114096 (10FFF0h) bytes,
instead of 1048576 (100000h).


This was always the biggest pustule on the facade of x86 to me. Gate A20
and other chicanery was nasty business. It always struck me as a hardware
hack to work around earlier bad design. Sure, you can eschew segmentation
and try to use multiple instructions to deliver some flat addressing, and
then your code was snail-slow. Real mode in 16 bit on x86 was/is some
fairly vulgar stuff due to segmentation (hate hate hate). Then it was made
"all better now" by protected mode and segment descriptors later *pat
pat*. Yeah. Ugh. Pleah. Ick.


I think Windows made it big because it could handle the flat 386 memory 
model and DOS did not.



All that fun sent me running into the arms of the M68k and its ilk, and
later MIPS (cue hallelujah chorus from the clouds). I'm not a MIPS god
(we have some here), but much love and respect to the architecture
nonetheless. I know enough to know "that's the good stuff". Nowadays I
wonder, since I'm using flat memory on the Unix boxes I code in (now
pretty much just in C, I haven't done ASM in a long while), what kind of
masochist maintains the SLAB/SLUB allocators for x86 Unix variants these
days. I want to buy them a six pack, pat them on the back, and say "you're
a braver man than I."


I never could buy the RISC argument of being a faster design.
Data access to main memory is always what slows a system down.


-Swift


Still playing around with TTL macros with an Altera FPGA here,
planning a mid-1970s-style TTL CPU. The cleanest version so far
is two 16-bit LS ALU boards and one control card. This gives
me a CPU with 2 sizes of data - BYTE and WORD - and a WHOPPING
5 registers: PC, AC1, AC2, INDEX, STACK.
No RISC here, just a simple CPU.


Ben.
PS: NO blinking lights sadly.


Re: strangest systems I've sent email from

2016-05-24 Thread Swift Griggs
On Tue, 24 May 2016, Guy Sotomayor Jr wrote:
> The initial implementation of the A20 gate was implemented by the 
> keyboard controller(!) because it was discovered late in the PC AT 
> development cycle and we couldn't add more logic to the board (but we 
> could add some wires).

That's very bizarre. It still makes me feel dirty just thinking about it, 
but it's interesting nonetheless. I wonder about some of the "clever hack" 
software that squeezed out a tad more memory by dancing around/in 
previously reserved memory. Isn't that sort of how Quarterdeck got 
started? I also remember XMS and EMS and all that fun, though the Amiga geek 
inside me is screaming that I shouldn't reveal that.

-Swift


Re: strangest systems I've sent email from

2016-05-24 Thread Guy Sotomayor Jr

> On May 24, 2016, at 2:32 PM, Swift Griggs  wrote:
> 
> On Tue, 24 May 2016, Fred Cisin wrote:
>> (OB_Picky:  Due to the overlap of segment and offset, on machines that had 21
>> address bits, real mode actually had a maximum of 1114096 (10FFF0h) bytes,
>> instead of 1048576 (100000h).
> 
> This was always the biggest pustule on the facade of x86 to me. Gate A20 
> and other chicanery was nasty business. It always struck me as a hardware 
> hack to work around earlier bad design. Sure, you can eschew segmentation 
> and try to use multiple instructions to deliver some flat addressing, and 
> then your code was snail-slow. Real mode in 16 bit on x86 was/is some 
> fairly vulgar stuff due to segmentation (hate hate hate). Then it was made 
> "all better now" by protected mode and segment descriptors later *pat 
> pat*. Yeah. Ugh. Pleah. Ick.

As someone who was there and participated in the whole A20 gate issue,
I can tell you why it was there (and Intel in its most recent chipsets finally
eliminated it).  It was for SW compatibility.

There was a well-known piece of software (at the time…whose name escapes
me) that made use of the fact that on the 8088, addresses wrapped around
at 1MB.  They took great advantage of it.  Unfortunately when IBM was
working on the PC AT with a 80286 (with 24-bit physical addresses) the
clever hack no longer worked.  So we had to put in a way to emulate the
wrap-around at 1MB…that was the A20 gate.

The initial implementation of the A20 gate was implemented by the
keyboard controller(!) because it was discovered late in the PC AT
development cycle and we couldn’t add more logic to the board (but we
could add some wires).

TTFN - Guy

Re: strangest systems I've sent email from

2016-05-24 Thread Guy Sotomayor Jr

> On May 24, 2016, at 1:29 PM, Liam Proven  wrote:
> 
> On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:
>> Because the 808x was a 16-bit processor with 1MB physical addressing.  I
>> would argue that for the time 808x was brilliant in that most other 16-bit
>> micros only allowed for 64KB physical.
> 
> 
> Er, hang on. I'm not sure if my knowledge isn't good enough or if that's a 
> typo.
> 
> AFAIK most *8* bits only supported 64 kB physical. Most *16* bits
> (e.g. 68000, 65816, 80286, 80386SX) supported 16MB physical RAM.
> 
> Am I missing something here?
> 
> I always considered the 8088/8086 as a sort of hybrid 8/16-bit processor.
> 

My definition of a CPU’s bitness is the native register width and not the
bus width (or ALU width).

From that definition, the 8088/8086 are 16-bit CPUs.  I would certainly
consider the 68K, etc to be 32-bit CPUs.  The 80286 was definitely a 16-bit
CPU and *any* 80386 (SX, DX, whatever) are most definitely 32-bits.

Your argument would say that most of the low end IBM 360’s would be
16-bit machines which is insane.

TTFN - Guy



Re: strangest systems I've sent email from

2016-05-24 Thread Fred Cisin

On Tue, 24 May 2016, Swift Griggs wrote:

This was always the biggest pustule on the facade of x86 to me. Gate A20
and other chicanery was nasty business. It always struck me as a hardware
hack to work around earlier bad design.

to work around earlier LIMITED design.

IFF 64K is reasonable for you, then x86 is excellent.

If you barely need a little more (a few 64K blocks), then it is OK.
(such as XenoCopy)

If you really need more, then it is a PITA.


Keep in mind that it was coming into a 64K world, and wasn't intended to 
last much past that.
If you are trying to use a 64K address processor in a multi-megabyte 
world, it's going to be a miserable misfit.

(like trying to use an air-cooled VW bus for commercial hauling)

Yeah, we use what's handy, even if it's not really suitable.



Re: strangest systems I've sent email from

2016-05-24 Thread Sean Conner
It was thus said that the Great Fred Cisin once stated:
> On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:
> >Because the 808x was a 16-bit processor with 1MB physical addressing.  I
> >would argue that for the time 808x was brilliant in that most other 16-bit
> >micros only allowed for 64KB physical.
> 
> Whether 8088 was an "8 bit" or "16 bit" processor depends heavily on how 
> you define those.
> Or, you could phrase it, that the 8 bit processors at the time handled 
> 64KiB of RAM.  The 808x still could see only 64KiB at a time, but let you 
> place that 64kiB almost anywhere that you wanted in a total RAM space of 
> 1MiB, and let you set 4 "preset" locations (CS, DS, SS, ES).  There were 
> some instructions, such as MOV, that could sometimes operate with 2 of 
> those presets.
> Thus, they expanded a 64KiB RAM processor to 1MiB, with minimal internal 
> changes.

  To explain this further: the 8086 (and 8088) was internally a 16-bit CPU. 
Since 16 bits can only address 64K, Intel used four "segment" registers to get
around this limit.  The four registers are CS, DS, ES and SS, and were the
upper 16 bits of the 20-bit address [1].  A physical address was calculated
as:

        +------------------+
        |  16-bit segment  | 0000    (shifted left 4 bits)
        +------------------+
    +        +------------------+
             |  16-bit offset   |
             +------------------+
    =   +----------------------+
        | 20-bit physical addr |
        +----------------------+

  Instructions were read from CS:IP (CS segment, IP register); most reads
and writes of data went to DS:offset, with some exceptions.  Pushes and pops
to the stack went to SS:SP and reads/writes with the BP register also used
the SS segment (SS:BP).  The string instructions used DS:SI as the source
address and ES:DI as the destination address.  And you could override the
default segment for a lot of instructions.  So:

mov ax,[bx] -- SRC address is DS:BX
mov es:[bx],ax  -- DEST address is ES:BX

  Technically, you could address as much as 256K without changing the
segment registers.
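
A minimal C sketch of that address formation, assuming a plain 8086 with
20 address lines (no A20 gate, so the sum wraps at 1MB):

    #include <stdint.h>
    #include <stdio.h>

    /* seg:off -> physical, masked to the 8086's 20 address bits */
    static uint32_t phys(uint16_t seg, uint16_t off)
    {
        return (((uint32_t)seg << 4) + off) & 0xFFFFF;
    }

    int main(void)
    {
        printf("%05X\n", (unsigned)phys(0xB800, 0x0000)); /* B8000: CGA text */
        printf("%05X\n", (unsigned)phys(0xFFFF, 0x0010)); /* wraps to 00000 */
        return 0;
    }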

  I got used to this, but I still preferred programming on the 6809 (8-bit
CPU) and 68000.  *Much* nicer architectures.

  -spc (And the 80386 introduced paging to this mess ... )

[1]	Starting with the 80286, in protected mode, they are treated
differently and no longer point to a physical address.  But that's
beyond the scope of *this* message.


Re: strangest systems I've sent email from

2016-05-24 Thread Swift Griggs
On Tue, 24 May 2016, Fred Cisin wrote:
> (OB_Picky:  Due to the overlap of segment and offset, on machines that had 21
> address bits, real mode actually had a maximum of 1114096 (10FFF0h) bytes,
> instead of 1048576 (100000h).

This was always the biggest pustule on the facade of x86 to me. Gate A20 
and other chicanery was nasty business. It always struck me as a hardware 
hack to work around earlier bad design. Sure, you can eschew segmentation 
and try to use multiple instructions to deliver some flat addressing, and 
then your code was snail-slow. Real mode in 16 bit on x86 was/is some 
fairly vulgar stuff due to segmentation (hate hate hate). Then it was made 
"all better now" by protected mode and segment descriptors later *pat 
pat*. Yeah. Ugh. Pleah. Ick.

All that fun sent me running into the arms of the M68k and its ilk, and 
later MIPS (cue hallelujah chorus from the clouds). I'm not a MIPS god 
(we have some here), but much love and respect to the architecture 
nonetheless. I know enough to know "that's the good stuff". Nowadays I 
wonder, since I'm using flat memory on the Unix boxes I code in (now 
pretty much just in C, I haven't done ASM in a long while), what kind of 
masochist maintains the SLAB/SLUB allocators for x86 Unix variants these 
days. I want to buy them a six pack, pat them on the back, and say "you're 
a braver man than I."

-Swift



Re: strangest systems I've sent email from

2016-05-24 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:
> > Because the 808x was a 16-bit processor with 1MB physical addressing.  I
> > would argue that for the time 808x was brilliant in that most other 16-bit
> > micros only allowed for 64KB physical.
> 
> Er, hang on. I'm not sure if my knowledge isn't good enough or if that's a 
> typo.
> 
> AFAIK most *8* bits only supported 64 kB physical. Most *16* bits
> (e.g. 68000, 65816, 80286, 80386SX) supported 16MB physical RAM.
> 
> Am I missing something here?

  It really depends on how you view a CPU, from a hardware or software
perspective.  From a software perspective, a 68000 was a 32-bit
architecture.  From a hardware perspective, the 68000 had a 16-bit bus and
24 physical address lines, and I'm sure at the time (1979) those
hardware limits were more due to costs and manufacturing ability (a 68-pin
chip was *huge* at the time) (furthermore, the 68008 was still internally a
32-bit architecture but only had an 8-bit external data bus---does this mean
it's an 8-bit CPU?).  The 68020 (I'm not sure about the 68010) had a 32-bit
physical address bus.

  You are right in that most 8-bit CPUs supported only 16 bits for a
physical address but there were various methods to extend that [1] but
limited to 64k at a time.

> I always considered the 8088/8086 as a sort of hybrid 8/16-bit processor.

  Again, internally, the 8088 was a 16-bit architecture but with an 8-bit
external data bus (and a 20-bit physical address space).  The 8086 had a
full 16-bit external data bus (and still 20-bit address space) and thus, was
a bit more expensive (not in CPU cost but more in external support with the
motherboard and memory bus).  The 80286 still had an external 16-bit bus but
had 24-physical address lines (16MB).

  The 8088/8086 could address 1MB of memory.  The 640K limit was due to
IBM's implementation of the PC, which chopped off the upper 384K for
ROM and video memory.  MS-DOS could easily use more, but it had to be a
consecutive block.  Some PCs in the early days of the clones did allow as
much as 700-800K of RAM for MS-DOS, but they weren't 100% IBM PC compatible
(BIOS yeah, but not hardware-wise) and thus were dead ends at the time.

  -spc

[1] Bank switching was one---a special hardware register (either I/O
mapped or memory mapped, depends upon the CPU) to enable/disable
banks of memory.
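
A sketch of the idea in [1], with an invented 8-bit machine that maps one
of several 16KB RAM banks into a window at 0x8000 via a memory-mapped
latch (both addresses are made up for illustration):

    #include <stdint.h>

    #define BANK_SELECT (*(volatile uint8_t *)0xFF00u) /* hypothetical latch */
    #define BANK_WINDOW ((volatile uint8_t *)0x8000u)  /* 16KB window */

    static uint8_t read_banked(uint8_t bank, uint16_t offset)
    {
        BANK_SELECT = bank;         /* swap the chosen bank into the window */
        return BANK_WINDOW[offset]; /* then address it like ordinary RAM */
    }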



Re: strangest systems I've sent email from

2016-05-24 Thread Fred Cisin

On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:

Because the 808x was a 16-bit processor with 1MB physical addressing.  I
would argue that for the time 808x was brilliant in that most other 16-bit
micros only allowed for 64KB physical.


Whether 8088 was an "8 bit" or "16 bit" processor depends heavily on how 
you define those.
Or, you could phrase it, that the 8 bit processors at the time handled 
64KiB of RAM.  The 808x still could see only 64KiB at a time, but let you 
place that 64kiB almost anywhere that you wanted in a total RAM space of 
1MiB, and let you set 4 "preset" locations (CS, DS, SS, ES).  There were 
some instructions, such as MOV, that could sometimes operate with 2 of 
those presets.
Thus, they expanded a 64KiB RAM processor to 1MiB, with minimal internal 
changes.



As opposed to starting over from scratch, ala Motorola.

Starting over from scratch made it possible to build an arguably better 
processor, whereas doing minimal changes reduced (and sometimes even 
eliminated) the need to rewrite existing code.
Which is more important - theoretically better processor, or availability 
of existing software?   Not everybody would choose the same.


It was claimed that in porting WordStar to the PC, MicroPro took longer to 
change the manual than to get the software working.
Accordingly, software availability for the PC was more "immediate" than 
for the Mac.   Both had extensive software availability by the end of the 
1980s, but how about during the first 6 months after release of the 
machine?


Since "Nobody programs in assembly any more, nor ever will again" 
(-Clancy/Harvey), that isn't as big a deal as it once was.




(OB_Picky:  Due to the overlap of segment and offset, on machines that had 
21 address bits, real mode actually had a maximum of 1114096 (10FFF0h) 
bytes, instead of 1048576 (100000h).  That was accessed by HIMEM.SYS at a 
time when memory space was scarce enough that another 64K could make a 
big difference.)
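
The arithmetic, as a quick C sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t top = ((uint32_t)0xFFFF << 4) + 0xFFFF; /* FFFF:FFFF */
        printf("highest byte:    %Xh\n", (unsigned)top);  /* 10FFEF */
        printf("bytes reachable: %u (%Xh)\n",             /* 1114096 (10FFF0) */
               (unsigned)(top + 1), (unsigned)(top + 1));
        return 0;
    }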




Re: strangest systems I've sent email from

2016-05-24 Thread Liam Proven
On 22 May 2016 at 04:52, Guy Sotomayor Jr  wrote:
> Because the 808x was a 16-bit processor with 1MB physical addressing.  I
> would argue that for the time 808x was brilliant in that most other 16-bit
> micros only allowed for 64KB physical.


Er, hang on. I'm not sure if my knowledge isn't good enough or if that's a typo.

AFAIK most *8* bits only supported 64 kB physical. Most *16* bits
(e.g. 68000, 65816, 80286, 80386SX) supported 16MB physical RAM.

Am I missing something here?

I always considered the 8088/8086 as a sort of hybrid 8/16-bit processor.


-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-23 Thread Chris Hanson
On May 20, 2016, at 8:58 PM, Sean Conner  wrote:
> 
> Now, it may sound cavalier of me, but of the three compilers I use at
> work (gcc, clang, Solaris Sun Works thingy) I know how to get them to layout
> the structs exactly as I need them

You can do this in Common Lisp too, though. It sounds more like the way C lets 
you represent the hardware matches closely with how you think about the 
hardware.

  -- Chris



Re: strangest systems I've sent email from

2016-05-23 Thread Maciej W. Rozycki
On Sun, 22 May 2016, Mouse wrote:

> > How can you have the type of `size_t' wider than the widest unsigned
> > integer type in the respective revision of the language standard?
> 
> unsigned long long int isn't necessarily the largest integral type.
> Nor do I see anything requiring size_t to be no larger than it.

 Right, an implementation is free to add its own extended integer types 
and `size_t' is not required to have the same width as one of the standard 
integer types.  There's a recommendation for `size_t' not to be wider than 
`long' (unless necessary, heh), however that's just a recommendation, not 
mandatory.

> uintmax_t, on the other hand, would be fine; it _is_ promised to be no
> smaller than size_t (or any other unsigned integral type).
> 
>   size_t foo = ~(uintmax_t)0;
> 
> should work fine to set foo to all-bits-set.  (Since size_t is
> unsigned, this will set it to be its largest possible value.)

 But there's no `uintmax_t' in C89.  If playing with casts already, I 
think the most obvious solution is simply:

size_t foo = ~(size_t)0;

  Maciej
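
A minimal C sketch contrasting the three spellings from this subthread
(it assumes a hosted C99 environment for SIZE_MAX, and the ~(size_t)0 form
additionally leans on the usual 2's-complement promotion behavior when
size_t is narrower than int):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        size_t a = (size_t)-1; /* conversion of -1: always the max value */
        size_t b = ~(size_t)0; /* all bits set, computed in size_t */
        size_t c = ~0UL;       /* only right if size_t <= unsigned long */
        printf("%d %d %d\n", a == SIZE_MAX, b == SIZE_MAX, c == SIZE_MAX);
        return 0;
    }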


Re: strangest systems I've sent email from

2016-05-22 Thread Mouse
>>>  Why bother?  Won't:
>>> size_t foo = ~0UL;
>>> do (~0ULL for C99)?

>> Only if size_t is no larger than unsigned long int (unsigned long
>> long int for the ULL version).  I don't think that's guaranteed.

> How can you have the type of `size_t' wider than the widest unsigned
> integer type in the respective revision of the language standard?

unsigned long long int isn't necessarily the largest integral type.
Nor do I see anything requiring size_t to be no larger than it.

uintmax_t, on the other hand, would be fine; it _is_ promised to be no
smaller than size_t (or any other unsigned integral type).

size_t foo = ~(uintmax_t)0;

should work fine to set foo to all-bits-set.  (Since size_t is
unsigned, this will set it to be its largest possible value.)

I think uintmax_t is another blemish in the standard, since it makes
certain kinds of innovation impossible.  (For example, the presence of
uintmax_t makes it impossible to extend the compiler to recognize
uintXX_t as "unsigned integer type with at least XX bits" for all XX,
presumably with library support for integers over hardware-supported
sizes.  At least, not without ceasing, strictly speaking, to be a C
compiler.)

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTMLmo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-22 Thread Chris Hanson
On May 22, 2016, at 6:09 AM, Mouse  wrote:
> 
> I've often thought about building a C implementation that goes out of
> its way to break assumptions like "integers are two's complement with
> no padding bits" or "floats are IEEE" or "nil pointers are all-bits-0"
> or "all pointers are the same size and representation" or etc.

Just use Symbolics C on a Symbolics or ZetaC on a TI Explorer. (ZetaC was 
recently put in the Public Domain by its author, so you can take a look at how 
it was implemented.) Both of them I believe implemented C89.

There’s also someone writing a C to Common Lisp translator that takes some of 
the same approaches as those compilers:  
Something like that would make it possible to pull C code into a project like 
Mezzano.

(Another approach would be to write an LLVM back-end that generates Common Lisp 
using the same approach as ZetaC, and port clang to it.)

  -- Chris



Re: strangest systems I've sent email from

2016-05-22 Thread Maciej W. Rozycki
On Sun, 22 May 2016, Mouse wrote:

>    size_t foo = (size_t)-1;
> >>>size_t foo = -(size_t)1;
> >> size_t foo = (size_t)(-1);
> >  Why bother?  Won't:
> > size_t foo = ~0UL;
> > do (~0ULL for C99)?
> 
> Only if size_t is no larger than unsigned long int (unsigned long long
> int for the ULL version).  I don't think that's guaranteed.

 How can you have the type of `size_t' wider than the widest unsigned 
integer type in the respective revision of the language standard?

  Maciej


Re: strangest systems I've sent email from

2016-05-22 Thread Mouse
>>>> size_t foo = (size_t)-1;
>>>size_t foo = -(size_t)1;
>> size_t foo = (size_t)(-1);
>  Why bother?  Won't:
>   size_t foo = ~0UL;
> do (~0ULL for C99)?

Only if size_t is no larger than unsigned long int (unsigned long long
int for the ULL version).  I don't think that's guaranteed.

Mouse


Re: strangest systems I've sent email from

2016-05-22 Thread Eric Christopherson
On Sun, May 22, 2016, Mouse wrote:
> >> Also, PostScript has a lot of language syntax, whereas FORTH has
> >> immediate words that act like language syntax.  (The difference is
> >> that FORTH makes it possible to change those words, thereby changing
> >> the apparent syntax.)
> > What do you mean by that?
> 
> Consider a simple definition
> 
> : foo swap - ;  ( inverted subtraction )
> /foo { exch sub } def  % inverted subtraction
> 
> (The first is FORTH[%], the second PostScript.)  Each of these has some
> "syntax" bits.  In FORTH, :, ;, (, and ).  In PostScript, the leading
> /, {, }, and %.

Interesting. I thought { } were just plain old words, but I'll at least
concede the rest.

> The difference is that in FORTH, you can create new immediate words
> and/or redefine the existing ones; : can do something other than
> beginning the definition of a word, and you can arrange to begin the
> definition of a word with something other than :.  In PostScript, none
> of this is mutable short of hacking on the underlying implementation
> (and if you do that the result isn't PostScript any longer).
> 
> [%] I think.  I don't really know FORTH; does it use - for subtraction?
> 
> /~\ The ASCII   Mouse
> \ / Ribbon Campaign
>  X  Against HTML  mo...@rodents-montreal.org
> / \ Email! 7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B

-- 
Eric Christopherson


Re: strangest systems I've sent email from

2016-05-22 Thread Maciej W. Rozycki
On Sun, 22 May 2016, Guy Dawson wrote:

> >> I'm not even sure
> >
> >>   size_t foo = (size_t)-1;
> >
> >> is legal,
> >
> > In C++, I don't know.  In C, I'm fairly sure it's legal.
> >
> >> or even does what I expect it to do (namely---set foo to the largest
> >> size_t value possible (pre C99).
> >
> > I'm not sure it does that.  If you want that, I think you want
> >
> >size_t foo = -(size_t)1;
> 
> While I think that
> 
> size_t foo = (size_t)(-1);
> 
> is what C would interpret as being meant. What the size of the thing that
> by default, in this implementation, -1 would be stored in.

 Why bother?  Won't:

size_t foo = ~0UL;

do (~0ULL for C99)?  Or is it just an example for the purpose of general 
consideration rather than a solution for a specific problem?

  Maciej


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-22 Thread Maciej W. Rozycki
On Sun, 22 May 2016, Mouse wrote:

> >>> First off, the C standard mandates that the order of fields in a
> >>> struct cannot be reordered,
> >> Yes.  (I think this is a Bad Thing, but I can see why they did it.)
> > Given that C is a systems implementation language, how would you
> > define HW related data structures where the order of the fields is
> > critical (ie HW defines them).
> 
> I can see three answers.
> 
> 1) Don't use structs for that.  Look at NetBSD's bus-space abstraction
>for one possible way.
> 
> 2) Make any reordering implementation-defined, so that code for
>specific implementations can know how the implementation does it.
> 
> 3) Make reordering optional.  Which way the default should go is
>arguable; since my guess is that most structs are not
>hardware-interface structs, the default should be reordering, with
>some keyword specifying no reordering.

4) While the C language standard may not mandate it itself, a specific 
   system ABI may require a particular bitfield, structure, etc. layout 
   which C compiler implementations for that platform need to adhere to, 
   and then you can rely on that.  Of course that does not solve the 
   problem for code which has to be portable across systems (e.g. option 
   card drivers), but there you usually need to take extra care for 
   differences between systems (e.g. endianness) anyway.

  Maciej


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-22 Thread Guy Sotomayor Jr

> On May 22, 2016, at 5:40 AM, Mouse  wrote:
> 
 First off, the C standard mandates that the order of fields in a
 struct cannot be reordered,
>>> Yes.  (I think this is a Bad Thing, but I can see why they did it.)
>> Given that C is a systems implementation language, how would you
>> define HW related data structures where the order of the fields is
>> critical (ie HW defines them).
> 
> I can see three answers.
> 
> 1) Don't use structs for that.  Look at NetBSD's bus-space abstraction
>   for one possible way.

I already have to do that for bit fields (ie do mask and shift).  The code
would look *much* cleaner/clearer if I could in fact use bit-fields.

Anything that is implementation defined results in them not being able
to be used.  For example, I have a peripheral that has structures that
define how to communicate with the peripheral.  If structs are “implementation”
defined, then I can’t use the same structure across the interface because
I can’t guarantee that one side will interpret the structure the same as
the other.

This is *exactly* the issue with bit fields.  I want to use them (because it
makes sense) as part of an interface but I can’t because I can’t know
what the compilers will do on each side.
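
A sketch of the mask-and-shift style in question, assuming a made-up
16-bit device register with a 3-bit MODE field at bits 4..6:

    #include <stdint.h>

    #define MODE_SHIFT 4
    #define MODE_MASK  (0x7u << MODE_SHIFT)

    static uint16_t set_mode(uint16_t reg, unsigned mode)
    {
        return (uint16_t)((reg & ~MODE_MASK)
                          | ((mode << MODE_SHIFT) & MODE_MASK));
    }

    static unsigned get_mode(uint16_t reg)
    {
        return (reg & MODE_MASK) >> MODE_SHIFT;
    }

    /* The bit-field spelling reads better, but the standard leaves the
       bit order to the compiler, so it can't cross an interface:
       struct ctrl { unsigned pad : 4; unsigned mode : 3; unsigned hi : 9; };
    */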
> 
> 2) Make any reordering implementation-defined, so that code for
>   specific implementations can know how the implementation does it.
> 
> 3) Make reordering optional.  Which way the default should go is
>   arguable; since my guess is that most structs are not
>   hardware-interface structs, the default should be reordering, with
>   some keyword specifying no reordering.


…and that sort of thing will cause all sorts of havoc in terms of breaking
software that currently works.  You’d have to go back and find/change
all of the places where reordering isn’t desired.  *If* it were to happen
I will bet dollars to donuts that the default would be no reordering and
there would be an attribute to allow for it.

…and what exactly is reordering supposed to buy you?

I would argue that for what you’re talking about is not “C” but some
language that may share some syntactic/semantic similarities with
“C” but is a clearly different language.

TTFN - Guy



Re: strangest systems I've sent email from

2016-05-22 Thread Lars Brinkhoff
Mouse  writes:
> I don't really know FORTH; does it use - for subtraction?

Generally, yes.


Re: strangest systems I've sent email from

2016-05-22 Thread Guy Dawson
>> I'm not even sure
>
>>   size_t foo = (size_t)-1;
>
>> is legal,
>
> In C++, I don't know.  In C, I'm fairly sure it's legal.
>
>> or even does what I expect it to do (namely---set foo to the largest
>> size_t value possible (pre C99).
>
> I'm not sure it does that.  If you want that, I think you want
>
>size_t foo = -(size_t)1;

While I think that

size_t foo = (size_t)(-1);

is what C would interpret as being meant. What the size of the thing that
by default, in this implementation, -1 would be stored in.



On 22 May 2016 at 14:09, Mouse  wrote:

> > I'd wager that most C code is written *assuming* 2's complement and 0
> > NULL pointers (and byte addressable, but I didn't ask for that 8-).
>
> Well, yes.  I'd go further: I'd wager most C code is written assuming
> everything works just the way the developer's system does; most people
> I interact with don't seem to realize that "try it and see how it
> works" is not a valid way to find out how C (as opposed to "C on this
> particular version of this particular implementation on this particular
> system") works.
>
> >   "[W]rite a VM with minimal bytecode and that uses 1s'
> >   complement and/or sign-magnitude.  Implement a GCC or LLVM
> >   backend for it if either of them has nominal support for that,
> >   or a complete C implementation if not.  [...]
> > Personally, I would *love* to see such a compiler (and would actually
> > use it, just to see how biased existing code is).
>
> I've often thought about building a C implementation that goes out of
> its way to break assumptions like "integers are two's complement with
> no padding bits" or "floats are IEEE" or "nil pointers are all-bits-0"
> or "all pointers are the same size and representation" or etc.
>
> > I'm not even sure
>
> >   size_t foo = (size_t)-1;
>
> > is legal,
>
> In C++, I don't know.  In C, I'm fairly sure it's legal.
>
> > or even does what I expect it to do (namely---set foo to the largest
> > size_t value possible (pre C99).
>
> I'm not sure it does that.  If you want that, I think you want
>
> size_t foo = -(size_t)1;
>
> > To me, I see 2's complement as having "won the war" so to speak.
>
> At present, yes, I agree.  I am not convinced that will not change.
>
> There was a time when "little endian, no alignment restrictions"
> appeared to have won the war (in the form of the VAX and, later, the
> x86).  These days, ARM's star is on the rise and others may follow,
> breaking the "no alignment restrictions" part (and possibly the "little
> endian" part - I forget whether ARM is big- or little-endian).
>
> >   http://blog.regehr.org/archives/[...]
>
> Yeah, I've read some of his writings.
>
> I disagree with him.  I think C needs to fork.  It seems to me there is
> need for two languages: an unsafe one, the "high level assembly
> language" C is regularly called, which is suitable for things such as
> hardware interfacing or malloc() implementation, and a safer one,
> closer to what blog.regehr.org seems to want, for...well, I'm not quite
> sure what he thinks this language would be better for.  Application
> layer stuff, I guess?
>
> /~\ The ASCII Mouse
> \ / Ribbon Campaign
>  X  Against HTMLmo...@rodents-montreal.org
> / \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B
>



-- 
4.4 > 5.4


Re: strangest systems I've sent email from

2016-05-22 Thread Mouse
>> Also, PostScript has a lot of language syntax, whereas FORTH has
>> immediate words that act like language syntax.  (The difference is
>> that FORTH makes it possible to change those words, thereby changing
>> the apparent syntax.)
> What do you mean by that?

Consider a simple definition

: foo swap - ;  ( inverted subtraction )
/foo { exch sub } def  % inverted subtraction

(The first is FORTH[%], the second PostScript.)  Each of these has some
"syntax" bits.  In FORTH, :, ;, (, and ).  In PostScript, the leading
/, {, }, and %.

The difference is that in FORTH, you can create new immediate words
and/or redefine the existing ones; : can do something other than
beginning the definition of a word, and you can arrange to begin the
definition of a word with something other than :.  In PostScript, none
of this is mutable short of hacking on the underlying implementation
(and if you do that the result isn't PostScript any longer).

[%] I think.  I don't really know FORTH; does it use - for subtraction?

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTMLmo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-22 Thread Mouse
> I'd wager that most C code is written *assuming* 2's complement and 0
> NULL pointers (and byte addressable, but I didn't ask for that 8-).

Well, yes.  I'd go further: I'd wager most C code is written assuming
everything works just the way the developer's system does; most people
I interact with don't seem to realize that "try it and see how it
works" is not a valid way to find out how C (as opposed to "C on this
particular version of this particular implementation on this particular
system") works.

>   "[W]rite a VM with minimal bytecode and that uses 1s'
>   complement and/or sign-magnitude.  Implement a GCC or LLVM
>   backend for it if either of them has nominal support for that,
>   or a complete C implementation if not.  [...]
> Personally, I would *love* to see such a compiler (and would actually
> use it, just to see how biased existing code is).

I've often thought about building a C implementation that goes out of
its way to break assumptions like "integers are two's complement with
no padding bits" or "floats are IEEE" or "nil pointers are all-bits-0"
or "all pointers are the same size and representation" or etc.

> I'm not even sure 

>   size_t foo = (size_t)-1;

> is legal,

In C++, I don't know.  In C, I'm fairly sure it's legal.

> or even does what I expect it to do (namely---set foo to the largest
> size_t value possible (pre C99).

I'm not sure it does that.  If you want that, I think you want

size_t foo = -(size_t)1;

> To me, I see 2's complement as having "won the war" so to speak.

At present, yes, I agree.  I am not convinced that will not change.

There was a time when "little endian, no alignment restrictions"
appeared to have won the war (in the form of the VAX and, later, the
x86).  These days, ARM's star is on the rise and others may follow,
breaking the "no alignment restrictions" part (and possibly the "little
endian" part - I forget whether ARM is big- or little-endian).

>   http://blog.regehr.org/archives/[...]

Yeah, I've read some of his writings.

I disagree with him.  I think C needs to fork.  It seems to me there is
need for two languages: an unsafe one, the "high level assembly
language" C is regularly called, which is suitable for things such as
hardware interfacing or malloc() implementation, and a safer one,
closer to what blog.regehr.org seems to want, for...well, I'm not quite
sure what he thinks this language would be better for.  Application
layer stuff, I guess?

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTMLmo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-21 Thread Guy Sotomayor Jr

> On May 21, 2016, at 7:34 PM, ben  wrote:
> 
> On 5/20/2016 2:58
> 
>> 
>> [4]  Say, a C compiler on an 8088.  How big is a pointer?  How big of an
>>  object can you point to?  How much code is involved with "p++"?
> 
> How come INTEL thought that 64 KB segments were ample? I guess they only used
> FLOATING point in the large time-shared machines.

Because the 808x was a 16-bit processor with 1MB physical addressing.  I
would argue that for the time 808x was brilliant in that most other 16-bit
micros only allowed for 64KB physical.  If people wanted more they had to
add external hardware and the calling linkage became problematic (I know
because that’s what we did on the IBM S/23 Datamaster that used an 8085
and allowed for 192KB of ROM and 128KB of RAM).

Floating point was not common at the time in micros because of the number
of transistors/gates necessary for the implementation.  Intel added it as
a “coprocessor” in the 8087.  When I was at IBM we continually railed on
Intel to make floating point standard so that we could have code that
assumed floating point was always present. It finally happened with the
80486 but then Intel took it away again (sort-of) with the 486-SX which
was brilliant marketing by Intel…initially allowed them to sell “floor
swept” 486’s with non-functional floating point units…eventually their
process improved and more often than not 486-SX systems that had the 
floating point coprocessor actually had 2 fully functional 486 processors!

TTFN - Guy



Re: strangest systems I've sent email from

2016-05-21 Thread Sean Conner
It was thus said that the Great ben once stated:
> On 5/20/2016 2:58
> 
> >
> >[4]  Say, a C compiler on an 8088.  How big is a pointer?  How big of an
> > object can you point to?  How much code is involved with "p++"?
> 
> How come INTEL thought that 64 KB segments were ample? I guess they only used
> FLOATING point in the large time-shared machines.

  The industry at the time was wanting larger CPUs than 8 bit.  Intel had an
existing 8-bit design, the 8080 and to fill demand, Intel had a few choices. 
It could break with any form of compatibility (object or source) and start
over with a clean slate [1].  Or they could keep some form of compatibility
and Intel went with (more or less) source compatibility.  You could
mechanically translate 8080 code into 8086 code with a high assurance it
would work, and thus customers of Intel could leverage the existing 8080
(and Z80) source base.

  And that's how you end up with a bizarre segmented 16-bit architecture.

  -spc

[1] Motorola took this approach when making the 68000.  It's nothing at
all like the 6800.


Re: strangest systems I've sent email from

2016-05-21 Thread ben

On 5/20/2016 2:58



[4]	Say, a C compiler on an 8088.  How big is a pointer?  How big of an
object can you point to?  How much code is involved with "p++"?


How come INTEL thought that 64 KB segments were ample? I guess they only used
FLOATING point in the large time-shared machines.
Ben.


Re: strangest systems I've sent email from

2016-05-21 Thread ben

On 5/21/2016 6:42 AM, Liam Proven wrote:

On 21 May 2016 at 07:14, Sean Conner  wrote:


  Oh my!  I'm reading the manual for the C compiler for the Unisys 2200 [1]
system and it's dated 2013!  And yes, it does appear to be a 36-bit non-byte
addressable system.



And you can run the OS free of charge on high-end x86 kit:

http://www.theregister.co.uk/2016/03/31/free_x86_mainframes_for_all_virtual_mainframes_that_is/



Strange how 36 bit computers have never left us. The real computers go 
to the US Government, and Windows goes to the UNWASHED MASSES.

I wonder how things would have changed if the PDP-11 had been 18 bits?
Ben.


Re: strangest systems I've sent email from

2016-05-21 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >>>   -spc (Wish the C standard committee had the balls to say "2's
> >>>   complement all the way, and a physical bit pattern of all 0s is a
> >>>   NULL pointer" ... )
> >> As far as I'm concerned, this is different only in degree from `Wish
> >> the C standard committee had the balls to say "Everything is x86".'.
> 
> > First off, can you supply a list of architectures that are NOT 2's
> > complement integer math that are still made and in active use today?
> 
> > Second, are there any architectures still commercially available and
> > used today where an all-zero bit pattern for an address *cannot* be
> > used as NULL?
> 
> What's the relevance?  You think the C spec should tie itself to the
> idiosyncracies of today's popular architectures?

  One more thing I forgot to mention:  Java integer ranges are 2's
complement, so it must assume 2's complement implementation.  I noticed that
Java is *also* available on the Unisys 2200, so either their implementation
of Java isn't quite kosher, or because the Unisys 2200 is emulated anyway,
they can "get by" with Java since the emulation of the Unisys is done on a
2's complement machine.

  -spc



Re: strangest systems I've sent email from

2016-05-21 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >>>   -spc (Wish the C standard committee had the balls to say "2's
> >>>   complement all the way, and a physical bit pattern of all 0s is a
> >>>   NULL pointer" ... )
> >> As far as I'm concerned, this is different only in degree from `Wish
> >> the C standard committee had the balls to say "Everything is x86".'.
> 
> > First off, can you supply a list of architectures that are NOT 2's
> > complement integer math that are still made and in active use today?
> 
> > Second, are there any architectures still commercially available and
> > used today where an all-zero bit pattern for an address *cannot* be
> > used as NULL?
> 
> What's the relevance?  You think the C spec should tie itself to the
> idiosyncracies of today's popular architectures?

  I'd wager that most C code is written *assuming* 2's complement and 0 NULL
pointers (and byte addressable, but I didn't ask for that 8-).  That the C
Standard is trying to cover sign-magnitude, 1's complement and 2's
complement leads to the darker, scarier, dangerous corners of C programming.

  I've been reading some interesting things on this.  

"Note that removing non-2's-complement from the standard would
completely ruin my stock response to all "what do you think of this
bit-twiddling extravaganza?" questions, which is to quickly confirm
that they don't work for 1s' complement negative numbers. As such
I'm either firmly against it or firmly in favour, but I'm not sure
which."

...

"[W]rite a VM with minimal bytecode and that uses 1s' complement
and/or sign-magnitude. Implement a GCC or LLVM backend for it if
either of them has nominal support for that, or a complete C
implementation if not. That both answers the question ("yes, I do
now know of a non-2's-complement implementation") and gives an
opportunity to file considered defect reports if the standard does
have oversights. If any of the defects is critical then it's
ammunition to mandate 2's complement in the next standard."


http://stackoverflow.com/questions/12276957/are-there-any-non-twos-complement-implementations-of-c?lq=1

  Personally, I would *love* to see such a compiler (and would actually use
it, just to see how biased existing code is).  From reading this
comp.lang.c++.moderated thread:


https://groups.google.com/forum/?hl=en#!topic/comp.lang.c++.moderated/gzwbsrZhix4

I'm not even sure 

size_t foo = (size_t)-1;

is legal, or even does what I expect it to do (namely---set foo to the
largest size_t value possible (pre C99).

  Now, I realize this is the Classic Computers Mailing list, which includes
support for all those wonderful odd-ball architectures of the past, but
really, in my research, I've found three sign-magnitude based computers:

IBM 7090
Burroughs series A
PB 250

(the IBM 1620 was signed-magnitude, but decimal based, which the C standard
doesn't support.  And from what I understand, most sign-magnitude based
machines were decimal in nature, not binary, so they need not apply)

and a slightly longer list of 1's complement machines:

Unisys 1100/2200
PDP-1  
LINC-8
PDP-12  (2's complement, but also included LINC-8 opcodes!)
CDC 160A
CDC 6000
Electrologica EL-X1 and EL-X8

I won't bother with listing 2's complement architectures because the list
would be too long and not at all inclusive of all systems (but please, feel
free to add to the list of binary sign-magnitude and 1's complement
systems).  Of the 1's complement listed, only the Unisys is still in active
use with a non-trivial number of systems but not primarily emulated.

  To me, I see 2's complement as having "won the war" so to speak.  It is
far from "idiosyncratic." And any exotic architecture of tomorrow won't be
covered in the C standard because the C standard only covers three integer
math implementations:

signed magnitude
1's complement
2's complement

If ternary or qubit computers become popular enough to support C, the C
standard would have to be changed anyway.
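
For concreteness, a quick sketch of how those three encodings represent -5
in an 8-bit byte (computed with unsigned arithmetic):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t mag  = 5;
        uint8_t twos = (uint8_t)(~mag + 1u);   /* 11111011 = FB */
        uint8_t ones = (uint8_t)~mag;          /* 11111010 = FA */
        uint8_t smag = (uint8_t)(0x80u | mag); /* 10000101 = 85 */
        printf("-5: two's %02X, one's %02X, sign-mag %02X\n",
               twos, ones, smag);
        return 0;
    }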

  The initial C standard, C89, was a codification of *existing* practice,
and I'm sure IBM pressed to have the C standard support non-2's complement
so they could check off the "Standard C box."  Yes, Unisys has a C compiler
for the Unisys 2200 system and one that is fairly recent (2013).  But I
could not find out if it supported C89, much less C99.  I couldn't tell.  

  And yes, you can get a C compiler for a 6502.  They exist.  But the ones
I've seen aren't ANSI C (personally, I think one would be hard pressed to
*get* an ANSI C compiler for a 6502; it's a poor match) and thus, again,
aren't affected by what I'd like.

  I'm not even alone in this.  Again, for your reading pleasure:

Proposal for a Friendly Dialect of C

Re: strangest systems I've sent email from

2016-05-21 Thread Sean Caron

On Sat, 21 May 2016, Liam Proven wrote:


On 21 May 2016 at 07:14, Sean Conner  wrote:


  Oh my!  I'm reading the manual for the C compiler for the Unisys 2200 [1]
system and it's dated 2013!  And yes, it does appear to be a 36-bit non-byte
addressable system.



And you can run the OS free of charge on high-end x86 kit:

http://www.theregister.co.uk/2016/03/31/free_x86_mainframes_for_all_virtual_mainframes_that_is/

--
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)



Thanks for posting that. I read the Reg but not on "the reg" so that one 
missed me. Downloading now! Sounds like fun ;)


Best,

Sean


Re: strangest systems I've sent email from

2016-05-21 Thread Liam Proven
On 21 May 2016 at 17:58, William Donzelli  wrote:
> Yes, I know - but so what? That is nothing new. The IBM 9370 line from
> 20-odd years ago was really an 801 inside, running S/370 in emulation.


I thought it was noteworthy considering that this subthread originated
in discussion of how all contemporary processors conformed to certain
norms.

This example of one that doesn't -- a 36-bit processor which doesn't
use 2's complement and so on -- today exists only as a software
emulation on an underlying architecture that /does/ conform and which
thus doesn't really resemble the architecture being emulated.

-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-21 Thread Guy Sotomayor Jr

> On May 21, 2016, at 4:33 AM, Mouse  wrote:
> 
>>> Most executables are not performance-critical enough for
>>> dynamic-linker overhead to matter.  (For the few that are, or for
>>> the few cases where lots are, yes, static linking can help.)
>> I keep telling myself that whenever I launch Firefox after a reboot
>> ...
> 
> Do you have reason to think dynamic-linker overhead is a perceptible
> fraction of that delay?
> 
>>> [file formats and protocols]
>> First off, the C standard mandates that the order of fields in a
>> struct cannot be reordered,
> 
> Yes.  (I think this is a Bad Thing, but I can see why they did it.)

Given that C is a systems implementation language, how would you
define HW related data structures where the order of the fields is
critical (ie HW defines them).

> 
>> so that just leaves padding and byte order to deal with.
> 
> And data type size.  (To pick a simple example, if your bytes are
> nonets, you will have an interesting time generating an octet stream by
> overlaying a struct onto a buffer.)
> 
> And alignment.  Not all protocols and file formats place every datatype
> at naturally-aligned boundaries in the octet stream.

That’s why there are #pragmas and other compiler directives (e.g. “packed”)
to handle this.
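
For example (the struct is invented; the attribute is the gcc/clang
spelling, while MSVC spells it #pragma pack):

    #include <stdint.h>

    struct wire_hdr {
        uint8_t  flags;
        uint32_t length; /* would otherwise be padded to a 4-byte boundary */
    } __attribute__((packed)); /* sizeof(struct wire_hdr) == 5, not 8 */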

> 
>> Now, it may sound cavalier of me, but of the three compilers I use at
>> work (gcc, clang, Solaris Sun Works thingy) I know how to get them to
>> layout the structs exactly as I need them
> 
> Great.  ...for code that doesn't mind writing off portability to other,
> including future, hardware and compilers.
> 
> I still don't see why you're citing "it works for my work environment"
> as justification for "the C standard should write off anything else”.

My biggest complaint about the C standard is that the order of bits
within a bit field is compiler-defined.  This basically means that they
are completely unusable for anything that requires interoperability.

TTFN - Guy



Re: strangest systems I've sent email from

2016-05-21 Thread Liam Proven
On 21 May 2016 at 17:33, William Donzelli  wrote:
> Unisys is still making new machines as well.


Yes it is, but they are x86 boxes running an emulator.

http://www.theregister.co.uk/2015/05/26/unisys_finally_weans_itself_off_cmos_chippery/

AFAIK only IBM is still making actual hardware mainframe processors.
The handful of other remaining vendors are all using software
emulation on generic x86 hardware. High-end hardware, sure, yes.

-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: strangest systems I've sent email from

2016-05-21 Thread William Donzelli
>   Wow!

Unisys is still making new machines as well.

--
Will


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-21 Thread Mouse
>> Most executables are not performance-critical enough for
>> dynamic-linker overhead to matter.  (For the few that are, or for
>> the few cases where lots are, yes, static linking can help.)
> I keep telling myself that whenever I launch Firefox after a reboot
> ...

Do you have reason to think dynamic-linker overhead is a perceptible
fraction of that delay?

>> [file formats and protocols]
> First off, the C standard mandates that the order of fields in a
> struct cannot be reordered,

Yes.  (I think this is a Bad Thing, but I can see why they did it.)

> so that just leaves padding and byte order to deal with.

And data type size.  (To pick a simple example, if your bytes are
nonets, you will have an interesting time generating an octet stream by
overlaying a struct onto a buffer.)

And alignment.  Not all protocols and file formats place every datatype
at naturally-aligned boundaries in the octet stream.

> Now, it may sound cavalier of me, but of the three compilers I use at
> work (gcc, clang, Solaris Sun Works thingy) I know how to get them to
> layout the structs exactly as I need them

Great.  ...for code that doesn't mind writing off portability to other,
including future, hardware and compilers.

I still don't see why you're citing "it works for my work environment"
as justification for "the C standard should write off anything else".

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTMLmo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-21 Thread Mouse
>>>   -spc (Wish the C standard committee had the balls to say "2's
>>>   complement all the way, and a physical bit pattern of all 0s is a
>>>   NULL pointer" ... )
>> As far as I'm concerned, this is different only in degree from `Wish
>> the C standard committee had the balls to say "Everything is x86".'.

> First off, can you supply a list of architectures that are NOT 2's
> complement integer math that are still made and in active use today?

> Second, are there any architectures still commercially available and
> used today where an all-zero bit pattern for an address *cannot* be
> used as NULL?

What's the relevance?  You think the C spec should tie itself to the
idiosyncracies of today's popular architectures?

> [3]   I only bring this up because you seem to be assuming my
>   position is "all the world's on x86"

No, I don't think that's your position.  I'm using that as a satirical
exaggeration of your position.  If I'd been writing this twenty years
ago, I would have written "VAX" instead, because that was the machine
widely assumed at the time.

>   And because of this, I checked some of your C code and I
>   noticed you used 0 and 1 as exit codes, which, pedantically
>   speaking, isn't portable.

%SYSTEM-W-NORMAL, normal successful completion.

My code makes no pretense to portability to all dialects of C.  (Well,
most of it; there might be a little that is supposed to be that
portable, but I can't think of anything offhand.)

Besides exit codes, I assume ints are relatively large (a significant
fraction of my code will explode badly on <32-bit ints) and that the
underlying system is basically Unix.  Some of it should work on
anything POSIX.  Relatively little of it will work on non-POSIX C
implementations.  Some of it even calls for NetBSD with my patches
applied (eg, anything depending on AF_TIMER sockets).

>   Yes, I'll admit this might be a low blow here ...

Perhaps.  But I don't see it as relevant.  It's a long way from "much
of my code is restricted to $CLASS_OF_ENVIRONMENTS" to "I think the C
standard should write off anything outside $CLASS_OF_ENVIRONMENTS".

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTMLmo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-21 Thread Lars Brinkhoff
Sean Conner  writes:
> It was thus said that the Great William Donzelli once stated:
>> There are probably a couple hundred Unisys 2200 systems left in the
>> world
> I suppose this is 1's complement but I see nothing about that in the
> manual

Aka UNISYS 1100/2200, aka ClearPath ix, aka ClearPath OS 2200.

https://en.wikipedia.org/wiki/UNIVAC_1100/2200_series


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Chuck Guzis
Oh, foo--in 40 years, "C" will be just a quaint recollection in the
minds of the old-timers.  Like JOVIAL.

--Chuck


Re: strangest systems I've sent email from

2016-05-20 Thread Sean Conner
It was thus said that the Great William Donzelli once stated:
> >   First off, can you supply a list of architectures that are NOT 2's
> > complement integer math that are still made and in active use today?  As far
> > as I can tell, there was only one signed-magnitude architecture ever
> > commercially available (and that in the early 60s) and only a few 1's
> > complement architectures from the 60s (possibly up to the early 70s) that
> > *might* still be in active use.
> 
> There are probably a couple hundred Unisys 2200 systems left in the
> world (no one really knows the true number). Of course, when the C
> standards were being drawn up, there were many more, with a small but
> significant share of the mainframe market.

  Oh my!  I'm reading the manual for the C compiler for the Unisys 2200 [1]
system and it's dated 2013!  And yes, it does appear to be a 36-bit non-byte
addressable system.  

  Wow!

  I am finding chapter 14 ("Strategies for Writing Efficient Code") amusing
("don't use pointers!" "don't use loops!")  I suppose this is 1's complement
but I see nothing about that in the manual, nor do I see any system limits
(like INT_MAX, CHAR_MAX, etc).  

  -spc (Color me surprised!)

[1] https://public.support.unisys.com/2200/docs/cp15.0/pdf/78310430-016.pdf


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> 
> > 3) It's slower.  Two reasons for this:
> 
> Even to the extent this is true, in most cases, "so what"?
> 
> Most executables are not performance-critical enough for dynamic-linker
> overhead to matter.  (For the few that are, or for the few cases where
> lots are, yes, static linking can help.)

  I keep telling myself that whenever I launch Firefox after a reboot ...

> > I use the uintXX_t types for interoperability---known file formats
> > and network protocols, and the plain (or known types like size_t)
> > otherwise.
> 
> uintXX_t does not help much with "known file formats and network
> protocols".  You have to either still serialize and deserialize
> manually - or blindly hope your compiler adds no padding bits (eg, that
> it lays out your structs exactly the way you hope it will).

  First off, the C standard mandates that the fields of a struct cannot be
reordered, so that just leaves padding and byte order to deal with.  Now, it
may sound cavalier of me, but of the three compilers I use at work (gcc,
clang, the Solaris Sun Works thingy) I know how to get them to lay out the
structs exactly as I need them (and it doesn't hurt that the files and
protocols we deal with are generally properly aligned anyway for those
systems that can't deal with misaligned reads (generally everything *BUT*
the x86)), and that we keep everything in network byte order. [1]
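
  For contrast, the manual serialization Mouse describes is short enough to
sketch.  A minimal, hedged example in the byte-order-independent style of
the Rob Pike post in [2]; the function names are mine, not from any
particular codebase:

    #include <stdint.h>

    /* Read a 32-bit value stored in network byte order (big-endian),
       independent of host byte order and of struct padding. */
    static uint32_t get_u32be(const unsigned char *p)
    {
      return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
           | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
    }

    /* Write a 32-bit value into a buffer in network byte order. */
    static void put_u32be(unsigned char *p, uint32_t v)
    {
      p[0] = (unsigned char)(v >> 24);
      p[1] = (unsigned char)(v >> 16);
      p[2] = (unsigned char)(v >>  8);
      p[3] = (unsigned char) v;
    }

  Reading a field is then x = get_u32be(buf + offset); -- no struct
overlay, no htonl(), and no compiler-specific packing tricks.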

  -spc 

[1] Sorry Rob Pike [2], but compilers aren't quite smart enough [3]
yet.

[2] https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html

[3] https://news.ycombinator.com/item?id=3796432


Re: strangest systems I've sent email from

2016-05-20 Thread William Donzelli
>   First off, can you supply a list of architectures that are NOT 2's
> complement integer math that are still made and in active use today?  As far
> as I can tell, there was only one signed-magnitude architecture ever
> commercially available (and that in the early 60s) and only a few 1's
> complement architectures from the 60s (possibly up to the early 70s) that
> *might* still be in active use.

There are probably a couple hundred Unisys 2200 systems left in the
world (no one really knows the true number). Of course, when the C
standards were being drawn up, there were many more, with a small but
significant share of the mainframe market.

--
Will (C sucks)


Re: strangest systems I've sent email from

2016-05-20 Thread Sean Conner
It was thus said that the Great Mouse once stated:
> >   -spc (Wish the C standard committee had the balls to say "2's
> >   complement all the way, and a physical bit pattern of all 0s is a
> >   NULL pointer" ... )
> 
> As far as I'm concerned, this is different only in degree from `Wish
> the C standard committee had the balls to say "Everything is x86".'.

  First off, can you supply a list of architectures that are NOT 2's
complement integer math that are still made and in active use today?  As far
as I can tell, there was only one signed-magnitude architecture ever
commercially available (and that in the early 60s) and only a few 1's
complement architectures from the 60s (possibly up to the early 70s) that
*might* still be in active use.

  Second, are there any architectures still commercially available and used
today where an all-zero bit pattern for an address *cannot* be used as NULL? 
Because it comes across as strange where:

char *p = (char *)0;

  is legal (that is, p will be assigned the NULL address for that platform
which may not be all-zero bits) whereas:

char *p;
memset(&p, 0, sizeof(p));

isn't (p is not guaranteed to be a proper NULL pointer) [1].
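
  The portable idiom is to assign the null pointer constant to each pointer
*value* rather than zeroing the bytes.  A minimal sketch, assuming nothing
beyond standard C; the function name is mine:

    #include <stddef.h>
    #include <stdlib.h>

    /* Return an array of n null pointers.  Assigning NULL (or 0) to a
       pointer always yields the platform's real null pointer, whatever
       its bit pattern; memset() and calloc() only promise all-zero
       *bytes*, which need not be the same thing. */
    char **make_table(size_t n)
    {
      char **tbl = malloc(n * sizeof *tbl);
      if (tbl != NULL)
        for (size_t i = 0; i < n; i++)
          tbl[i] = NULL;
      return tbl;
    }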

  My wish covers yes, the x86, but also covers the 68k, MIPS, SPARC,
PowerPC and ARM.  Are there any others I missed?

  -spc (Also, the only portable exit codes are EXIT_SUCCESS and EXIT_FAILURE
[2][3])

[1] I know *why* this happens, but try to explain this to someone who
hasn't had *any* exposure to a non-byte, non-2's complement,
non-8-32-64bit system.

[2] From my understanding, it's DEC that mandated these instead of 0 and
1, because VMS used a different interpretation of exit codes than
everybody else ...

[3] I only bring this up because you seem to be assuming my position is
"all the world's on x86" when it's not (the world is "2's complement
byte oriented machines").  And because of this, I checked some of
your C code and I noticed you used 0 and 1 as exit codes, which,
pedantically speaking, isn't portable.

Yes, I'll admit this might be a low blow here ...


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Mouse
>   I'm generally not a fan of shared libraries as:

>   1) Unless you are linking against a library like libc or
>   libc++, a lot of memory will be wasted because the *entire*
>   library is loaded up, unlike linking to a static library where
>   only those functions actually used are linked into the final
>   executable

Yes and no.  (Caveat: the following discussion assumes a number of
things which are true in currently-common implementations; in cases
where they are false, the balance may well tip the other way.)

First, except for pages touched by dynamic linking (which should be
very few if the library was built properly), the library is shared
among all copies of it in use.  In a statically linked program, this is
true only across a single executable; with shared objects, it is true
across all executables linked with that library.

Second, even if the executable is the only one using the library, most
of it is demand-paged, occupying no physical memory unless/until it's
referenced.  (It still occupies virtual memory; if virtual memory is
tight - eg, you're doing gigabyte-sized datasets on a 32-bit machine -
then this can matter.)

>   2) because of 1, you have a huge surface area exposed that can
>   be exploited.

True.

Also, it exposes more attack surface in another way: it means there are
two files, not one, the modification of which can subvert the
application.  (The executable itself and the shared object.)  Add one
more for each additional shared object used.

>   3) It's slower.  Two reasons for this:

Even to the extent this is true, in most cases, "so what"?

Most executables are not performance-critical enough for dynamic-linker
overhead to matter.  (For the few that are, or for the few cases where
lots are, yes, static linking can help.)

> I use the uintXX_t types for interoperability---known file formats
> and network protocols, and the plain (or known types like size_t)
> otherwise.

uintXX_t does not help much with "known file formats and network
protocols".  You have to either still serialize and deserialize
manually - or blindly hope your compiler adds no padding bits (eg, that
it lays out your structs exactly the way you hope it will).

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML   mo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-20 Thread Mouse
>   -spc (Wish the C standard committee had the balls to say "2's
>   complement all the way, and a physical bit pattern of all 0s is a
>   NULL pointer" ... )

As far as I'm concerned, this is different only in degree from `Wish
the C standard committee had the balls to say "Everything is x86".'.

> [3]   Often to disastrous results.  An aggressive C optimizer can
>   optimize the following right out:

>   if (x + 1 < x ) { ... }

>   Because "x+1" can *never* be less than "x" (signed overflow?
>   What's that?)

More precisely, because signed overflow invokes undefined behaviour,
meaning that, for values of x where x+1 overflows, the program can
behave in any way whatever and still be within spec.  Including, say,
skipping the stuff inside the { } block.
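
A sketch of the usual defensive rewrite, for what it's worth: do the
comparison before the arithmetic, so the overflow never happens and the
optimizer has nothing to discard.  (Plain standard C; the function name is
illustrative.)

    #include <limits.h>

    /* What "if (x + 1 < x)" was trying to express, minus the
       undefined behaviour: test the bound first. */
    int checked_inc(int x)
    {
      if (x == INT_MAX)
        return x;       /* overflow would occur; treat as an error */
      return x + 1;
    }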

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML   mo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Sean Conner
It was thus said that the Great Swift Griggs once stated:
> On Fri, 20 May 2016, Sean Conner wrote:
> > By the late 80s, C was available on many different systems and was not 
> > yet standardized.
> 
> There were lots of standards, but folks typically gravitated toward K&R or 
> ANSI at the time. Though I was a pre-teen, I was a coder at the time.  
> Those are pretty raw and primitive compared to C99 or C11, but still quite 
> helpful, for me at least. Most of the other "standards" were pretty much a 
> velvet glove around vendor-based "standards", IMHO.

  In 1988, C had yet to be standardized.

  In 1989, ANSI released the first C standard, commonly called ANSI C or
C89.  

  I started C programming in 1990, so I used ANSI C pretty much from the
start.  I found I prefer ANSI C over K&R (pre-ANSI C), because the
compiler can catch more errors.

> > The standards committee was convened in an attempt to make sense of all 
> > the various C implementations and bring some form of sanity to the 
> > market.
> 
> I'm pretty negative on committees, in general. However, ISO and ANSI 
> standards have worked pretty well, so I suppose they aren't totally 
> useless _all_ the time.
> 
> Remember OSI networking protocols? They had a big nasty committee for all 
> their efforts, and we can see how that worked out. We got the "OSI model" 
> (which basically just apes other models already well established at the 
> time). That's about it (oh sure, a few other things like X.500 inspired 
> protocols but I think X.500 is garbage *shrug* YMMV). Things like TPx 
> protocols never caught on. Some would say it was because the world is so 
> unenlightened it couldn't recognize the genius of the commisar^H^H^H 
> committee's collective creations. I have a somewhat different viewpoint.

  The difference between the two?  ANSI codified existing practice, whereas
ISO created a standard in a vacuum and expected people to write
implementations to the standard.

> > All those "undefined" and "implementation" bits of C?  Yeah, competing 
> > implementations.
> 
> Hehe, what is a long long? Yes, you are totally right. Still, I assert 
> that C is the de facto most portable language on Earth. What other 
> language runs on as many OSes and CPUs? None that I can think of.

  A long long is at least 64 bits long.

  And Lua can run on as many OSs and CPUs as C.  

> > And because of the bizarre systems C can potentially run on, pointer 
> > arithmetic is ... odd as well [4].
> 
> Yeah, it's kind of an extension of the same issue, too many undefined grey 
> areas. In practice, I don't run into these types of issues much. However, 
> to be fair, I typically code on only about 3 different platforms, and they 
> are pretty similar and "modern" (BSD, Linux, IRIX).

  Just be thankful you never had to program C in the 80s and early 90s:

http://www.digitalmars.com/ctg/ctgMemoryModel.html

  Oh, wait a second ...


http://eli.thegreenplace.net/2012/01/03/understanding-the-x64-code-models

> > It also doesn't help that bounds checking arrays is a manual process, 
> > but then again, it would be a manual process on most CPUs [5] anyway ...
> 
> I'm in the "please don't do squat for me that I don't ask for" camp. 

  What's wrong with the following code?

p = malloc(sizeof(somestruct) * count_of_items);

  Spot the bug yet?  

  Here's the answer:  it can overflow.  But that's okay, because sizeof()
returns an unsigned quantity, and count_of_items *should* be an unsigned
quantity (both size_t), and overflow on unsigned quantities *is* defined to
wrap (it's signed quantities that are undefined).  But that's *still* a
problem, because if "sizeof(somestruct) * count_of_items" exceeds the range
of a size_t, then the result is *smaller* than expected and you get a valid
pointer back, but to a smaller pool of memory than expected.

  This may not be an issue on 64-bit systems (yet), but it can be on a
32-bit system.  Correct system code (in C99) would be:

if (count_of_items > (SIZE_MAX / sizeof(somestruct)))
  error();
p = malloc(sizeof(somestruct) * count_of_items);
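
  Folding the check into a wrapper keeps it from being forgotten.  A
minimal sketch (the function name is mine); note that calloc(nmemb, size)
is expected to perform the same multiplication check itself, at the cost
of zeroing the memory:

    #include <stdint.h>   /* SIZE_MAX (C99) */
    #include <stdlib.h>

    /* malloc() an array of nmemb elements of the given size, refusing
       any request whose byte count would wrap around. */
    void *malloc_array(size_t nmemb, size_t size)
    {
      if (size != 0 && nmemb > SIZE_MAX / size)
        return NULL;                /* nmemb * size would overflow */
      return malloc(nmemb * size);
    }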

  Oooh ... that reminds me ... I have some code to check ... 

> I know that horrifies and disgusts some folks who want GC and auto-bounds
> checking everywhere they can cram it in. Would SSA form aid with all kinds
> of fancy compiler optimizations, including some magic bounds checking? 
> Sure. However, perhaps because I'm typical of an ignorant C coder, I would
> expect the cost of any such feature would be unacceptable to some. 

  Don't discount GC though---it simplifies a lot of code.


> Also,
> there are plenty of C variants or special compilers that can do such
> things. Also, there are a few things that can be run with LD_PRELOAD which
> can help to find issues where someone has forgot to do proper bounds
> checking. 

  I'm generally not a fan of shared libraries as:

        1) Unless you are linking against a library like libc or
           libc++, a lot of memory will be wasted because the *entire*
           library is loaded up, unlike linking to a static library where
           only those functions actually used are linked into the final
           executable

        2) because of 1, you have a huge surface area exposed that can
           be exploited.

C standards and committees (was Re: strangest systems I've sent email from)

2016-05-20 Thread Swift Griggs
On Fri, 20 May 2016, Sean Conner wrote:
> By the late 80s, C was available on many different systems and was not 
> yet standardized.

There were lots of standards, but folks typically gravitated toward K&R or 
ANSI at the time. Though I was a pre-teen, I was a coder at the time.  
Those are pretty raw and primitive compared to C99 or C11, but still quite 
helpful, for me at least. Most of the other "standards" were pretty much a 
velvet glove around vendor-based "standards", IMHO.

> The standards committee was convened in an attempt to make sense of all 
> the various C implementations and bring some form of sanity to the 
> market.

I'm pretty negative on committees, in general. However, ISO and ANSI 
standards have worked pretty well, so I suppose they aren't totally 
useless _all_ the time.

Remember OSI networking protocols? They had a big nasty committee for all 
their efforts, and we can see how that worked out. We got the "OSI model" 
(which basically just apes other models already well established at the 
time). That's about it (oh sure, a few other things like X.500 inspired 
protocols but I think X.500 is garbage *shrug* YMMV). Things like TPx 
protocols never caught on. Some would say it was because the world is so 
unenlightened it couldn't recognize the genius of the commisar^H^H^H 
committee's collective creations. I have a somewhat different viewpoint.

> All those "undefined" and "implementation" bits of C?  Yeah, competing 
> implementations.

Hehe, what is a long long? Yes, you are totally right. Still, I assert 
that C is the de facto most portable language on Earth. What other 
language runs on as many OSes and CPUs? None that I can think of.

> And because of the bizarre systems C can potentially run on, pointer 
> arithmetic is ... odd as well [4].

Yeah, it's kind of an extension of the same issue, too many undefined grey 
areas. In practice, I don't run into these types of issues much. However, 
to be fair, I typically code on only about 3 different platforms, and they 
are pretty similar and "modern" (BSD, Linux, IRIX).

> It also doesn't help that bounds checking arrays is a manual process, 
> but then again, it would be a manual process on most CPUs [5] anyway ...

I'm in the "please don't do squat for me that I don't ask for" camp. I 
know that horrifies and disgusts some folks who want GC and auto-bounds 
checking everywhere they can cram it in. Would SSA form aid with all kinds 
of fancy compiler optimizations, including some magic bounds checking? 
Sure. However, perhaps because I'm typical of an ignorant C coder, I would 
expect the cost of any such feature would be unacceptable to some. Also, 
there are plenty of C variants or special compilers that can do such 
things. Also, there are a few things that can be run with LD_PRELOAD which 
can help to find issues where someone has forgotten to do proper bounds 
checking. Once people get on the "C sucks, put on some baby gates" bus, 
most of the actual working C coders get off at the next stop. Not judging, 
just observing. I've seen this over and over and over and ... 

>   -spc (Wish the C standard committee had the balls to say "2's complement
>   all the way, and a physical bit pattern of all 0s is a NULL pointer"
>   ... )

I'm right there with you on this one! 

> Because "x+1" can *never* be less than "x" (signed overflow?  What's 
> that?)

Hmm, well, people (me included in days gone by) tend to abuse signed 
scalars to simply get bigger integers. I really wish folks would embrace 
uintXX_t style ints ... problem solved, IMHO. It's right there for them in 
C99 to use.
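
A minimal illustration, assuming any C99 compiler: the unsigned fixed-width
types give you defined wraparound, which is what the signed-scalar abuse
was reaching for all along.

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
      uint32_t x = UINT32_MAX;    /* 4294967295 */
      x = x + 1;                  /* defined behaviour: wraps to 0 */
      printf("%" PRIu32 "\n", x);
      return 0;
    }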

> Except for, say, the Intel 432.  Automatic bounds checking on that one.

You can't always rely on the hardware, but perhaps that's your point.

> Trapping on signed overflow is still contentious today.  While some 
> systems can trap immediately on overflow (VAX, MIPS), the popular CPUs 
> today can't.

Hmmm, well I'm guessing most compilers on those platforms would support 
that (and hey, great!). 

> It's not to say they can't test for it, but that's the problem---they 
> have to test after each possible operation.

That's almost always the case when folks want rubber bumpers on C. That's 
really emblematic of my issues with that seemingly instinctual reaction some 
folks have to C.

-Swift



Re: strangest systems I've sent email from

2016-05-20 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 29 April 2016 at 19:49, Mouse  wrote:
> > 
> > It's true that C is easy to use unsafely.  However, (a) it arose as an
> > OS implementation language, for which some level of unsafeness is
> > necessary, and (b) to paraphrase a famous remark about Unix, I suspect
> > it is not possible to eliminate the ability to do stupid things in C
> > without also eliminating the ability to do some clever things in C.
> 
> I think that the key thing is not to offer people alternatives that
> make it safer at the cost of removal of the clever stuff. It's to
> offer other clever stuff instead. C is famously unreadable, and yet
> most modern languages ape its syntax.

  By the late 80s, C was available on many different systems and was not yet
standardized.  The standards committee was convened in an attempt to make
sense of all the various C implementations and bring some form of sanity to
the market.  All those "undefined" and "implementation" bits of C?  Yeah,
competing implementations.

  For instance, why is signed integer arithmetic so underspecified?  So that
it could run on everything from a signed-magnitude machine [1] to a
pi-complement machine (with e-states per bit [2]).  Also, to give C
implementors a way to optimize code [3][6].

  And because of the bizarre systems C can potentially run on, pointer
arithmetic is ... odd as well [4].

  It also doesn't help that bounds checking arrays is a manual process, but
then again, it would be a manual process on most CPUs [5] anyway ... 
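
  The manual process in question takes only a few lines, which is both its
charm and the problem -- a minimal sketch, names illustrative:

    #include <stdlib.h>   /* size_t, abort() */

    /* Fetch a[i], checking the index by hand the way C requires. */
    int get(const int *a, size_t len, size_t i)
    {
      if (i >= len)
        abort();          /* out of bounds; or report an error */
      return a[i];
    }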

  -spc (Wish the C standard committee had the balls to say "2's complement
all the way, and a physical bit pattern of all 0s is a NULL pointer"
... )

[1] I think there was only one signed-magnitude CPU commercially
available, ever!  From the early 60s!  Seriously doubt C was ever
ported to that machine, but hey!  It could be!

[2] 2.71828 ... 

[3]     Often to disastrous results.  An aggressive C optimizer can optimize
the following right out:

if (x + 1 < x ) { ... }

Because "x+1" can *never* be less than "x" (signed overflow?  What's
that?)

[4]     Say, a C compiler on an 8088.  How big is a pointer?  How big of an
object can you point to?  How much code is involved with "p++"?

[5] Except for, say, the Intel 432.  Automatic bounds checking on that
one.  

[6]     Trapping on signed overflow is still contentious today.  While some
        systems can trap immediately on overflow (VAX, MIPS), the popular CPUs
        today can't.  It's not to say they can't test for it, but that's the
        problem---they have to test after each possible operation.  And not
        all instructions that manipulate values affect the overflow bit
        (it's not uncommon on the x86, for instance, to use an LEA
        instruction (which does not affect flags) to multiply a value by
        small constants (like 17, say), which is faster than a full multiply
        and probably uses fewer registers and prevents clogging the
        pipelines).
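
        A present-day aside: gcc 5+ and recent clang expose checked
        arithmetic as builtins, which on the x86 compile to the add plus
        a branch on the overflow flag rather than a separate pre-test.
        A hedged sketch, assuming one of those compilers:

            /* Non-standard: __builtin_add_overflow is a gcc/clang
               extension.  It stores the (wrapped) sum in *out and
               returns nonzero if the mathematical result did not fit. */
            int add_checked(int a, int b, int *out)
            {
              return __builtin_add_overflow(a, b, out);
            }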


Test software on machines customers actually use - was Re: strangest systems I've sent email from

2016-05-20 Thread Toby Thain

On 2016-05-20 3:03 PM, Fred Cisin wrote:

I don't know if that was a specific market ploy based on Moore's Law,


an actually quite smart move, . . .


or just the generally accepted practice of getting an initial version
with the API working any which way, then refactoring to improve
performance/correctness in later versions.


For decades, I used to rant that the biggest problem with Microsoft
software was that they treated their programmers "too well".

That if a Microsoft programmer had space problems, they would immediately
replace his machine with one with more RAM and bigger drive, and he
wouldn't learn to be memory or disk space efficient.

That if his programs were too slow, that they would immediately replace
his machine with a faster one, and he would never learn to write fast or
efficient code.


This is still a huge problem that afflicts web development as much as it 
did desktop development.


Devs should have down-specced machines (say, 5+ years old) or at the 
very least, should be regularly testing on them.


--Toby



If there was ever a hardware problem, they would immediately replace the
machine.  Accordingly, Microsoft programmers NEVER actually experienced
hardware issues, and had to IMAGINE what disk errors, etc. would be
like, resulting in software that couldn't properly handle them when they
happened. ...

OK, so development should be targeted for next generation hardware.
BUT, testing should be done with what is actually out in the real world.

--
Grumpy Ol' Fred ci...@xenosoft.com





Re: strangest systems I've sent email from

2016-05-20 Thread Austin Pass
On 20 May 2016, at 20:03, Fred Cisin  wrote:

>> I don't know if that was a specific market ploy based on Moore's Law,
> 
> an actually quite smart move, . . .
> 
>> or just the generally accepted practice of getting an initial version
>> with the API working any which way, then refactoring to improve
>> performance/correctness in later versions.
> 
> For decades, I used to rant that the biggest problem with Microsoft software 
> was that they treated their programmers "too well".
> 
> That if a Microsoft programmer had space problems, they would immediately 
> replace his machine with one with more RAM and bigger drive, and he wouldn't 
> learn to be memory or disk space efficient.
> 
> That if his programs were too slow, that they would immediately replace his 
> machine with a faster one, and he would never learn to write fast or 
> efficient code.
> 
> If there was ever a hardware problem, they would immediately replace the 
> machine.  Accordingly, Microsoft programmers NEVER actually experienced 
> hardware issues, and had to IMAGINE what disk errors, etc. would be like, 
> resulting in software that couldn't properly handle them when they happened.  
> For example, when SMARTDRV was causing MANY problems with write-caching 
> (TOTAL failure and data loss if even a minor disk error occurs), Microsoft 
> was in denial, and couldn't understand that their software needed to be able 
> to recover, or at least sanely handle the situation when an error occurred.
> They did not CARE ("well, that's a hardware problem, not our problem.") that 
> a single bad sector (unfound by SPINRITE) in the disk space occupied by the 
> WINGBATS font, totally prevented installation of Windoze 3.10.
> [cf. "disk compression problems" due to SMARTDRV, and their need to replace 
> DOS6.00 with 6.20]
> 
> 
> I used to rant that if Microsoft were to "trade machines with us", and give 
> their programmers current or old, rather than newest, machines, that their 
> programmers might finally learn how to write robust compact fast software.

The rumour was that Bill Gates insisted programmers use 386s when writing 
Windows '95, although I'm struggling to find a single shred of evidence 
supporting this statement, so it may be mis-remembered fantasy.

-Austin.

Re: strangest systems I've sent email from

2016-05-20 Thread Fred Cisin

I don't know if that was a specific market ploy based on Moore's Law,


an actually quite smart move, . . .


or just the generally accepted practice of getting an initial version
with the API working any which way, then refactoring to improve
performance/correctness in later versions.


For decades, I used to rant that the biggest problem with Microsoft 
software was that they treated their programmers "too well".


That if a Microsoft programmer had space problems, they would immediately 
replace his machine with one with more RAM and bigger drive, and he 
wouldn't learn to be memory or disk space efficient.


That if his programs were too slow, that they would immediately replace 
his machine with a faster one, and he would never learn to write fast or 
efficient code.


If there was ever a hardware problem, they would immediately replace the 
machine.  Accordingly, Microsoft programmers NEVER actually experienced 
hardware issues, and had to IMAGINE what disk errors, etc. would be like, 
resulting in software that couldn't properly handle them when they 
happened.  For example, when SMARTDRV was causing MANY problems with 
write-caching (TOTAL failure and data loss if even a minor disk error 
occurs), Microsoft was in denial, and couldn't understand that their 
software needed to be able to recover, or at least sanely handle the 
situation when an error occurred.
They did not CARE ("well, that's a hardware problem, not our problem.") 
that a single bad sector (unfound by SPINRITE) in the disk space occupied 
by the WINGBATS font, totally prevented installation of Windoze 3.10.
[cf. "disk compression problems" due to SMARTDRV, and their need to 
replace DOS6.00 with 6.20]



I used to rant that if Microsoft were to "trade machines with us", and 
give their programmers current or old, rather than newest, machines, that 
their programmers might finally learn how to write robust compact fast 
software.



OK, so development should be targeted for next generation hardware.
BUT, testing should be done with what is actually out in the real world.

--
Grumpy Ol' Fred ci...@xenosoft.com


Re: strangest systems I've sent email from

2016-05-20 Thread David Brownlee
On 20 May 2016 at 17:24, Liam Proven  wrote:
>
> On 18 May 2016 at 21:40, Fred Cisin  wrote:
> > But, "Moore's Law" held that it wouldn't be much longer.
> > Just one doubling of the speed of the Lisa's hardware would have been enough
> > to silence the speed complaints.
>
> A general point, really.
>
> One of Microsoft's strokes of brilliance was selectively exploiting
> this. I think maybe it learned it from the 80286 OS/2 1.x débacle.
>
> NT 3.1 was brilliant if a bit bulky and unoptimised. Fair enough, it
> was a v1.0 OS. It was way way WAY too heavy for the average 1993 PC,
> but power users played, partly 'cos it fixed serious problems with
> Windows 3.1.
>
> [[make it work, ship version, then make it fast, then ship new version]]

I don't know if that was a specific market ploy based on Moore's Law,
or just the generally accepted practice of getting an initial version
with the API working any which way, then refactoring to improve
performance/correctness in later versions.

I seem to recall a comment that when select() was introduced in BSD 4.2
it was implemented as an in-kernel loop polling each file descriptor,
so they could ship something with the API they wanted, and then in 4.3
the implementation was refactored to... how should we say... be much
more performant :)


Re: strangest systems I've sent email from

2016-05-20 Thread Liam Proven
On 18 May 2016 at 21:40, Fred Cisin  wrote:
> But, "Moore's Law" held that it wouldn't be much longer.
> Just one doubling of the speed of the Lisa's hardware would have been enough
> to silence the speed complaints.


A general point, really.

One of Microsoft's strokes of brilliance was selectively exploiting
this. I think maybe it learned it from the 80286 OS/2 1.x débacle.

NT 3.1 was brilliant if a bit bulky and unoptimised. Fair enough, it
was a v1.0 OS. It was way way WAY too heavy for the average 1993 PC,
but power users played, partly 'cos it fixed serious problems with
Windows 3.1.

(You could run a Win3.1 16-bit app in its own memory space & thus
slightly get round Win3.1's terrible low resource limitations. Source:
my customers did it, and paid GBP 5K for a PC to run it on for that
reason.)

NT 3.5 fixed some of that and now the PC was £3.5K or so.

NT 3.51 was pretty good and now the PC was £2.5-£2K -- in other words,
accessible to a high-end power user. The Win3 UI kept the proles away
-- they wanted the friendlier Win95.

NT 4 brought the UI, and now, a plain vanilla high-end PC could run it.

The cycle sort of repeated with XP and Vista -- they were aimed a bit
above the vanilla cheapo turn-of-the-century PC and its successor. The
market caught up as they matured.

Selectively aiming a bit ahead of where the ordinary PC was allowed MS
to refine the OSes in public, so they were ready for prime-time by the
time that the market had caught up.

IBM, OTOH, aimed at the thousands of boxes /it had already sold/ and
so totally torpedoed its own product.


-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: strangest systems I've sent email from

2016-05-20 Thread Liam Proven
On 29 April 2016 at 19:49, Mouse  wrote:
>>> True, but again, *you shouldn't have to*. It means programmer
>>> effort, brain power, is being wasted on thinking about being safe
>>> instead of spent on writing better programs.
>
> True, but...
>
>> One side effect of this is that it makes a lot of C programmers
>> pedants.
>
> ...this is also true, and it means the development of a mindset that's
> better equipped to catch higher-level mistakes as well as the low-level
> mistakes.

My points are:

* not everyone is _able_ to develop that mindset.
* some (many) that cannot, think that they can.
* humans are fallible.
* modern codebases are _vast_ and hugely complex -- too big for any
individual to absorb & comprehend.

> It's true that C is easy to use unsafely.  However, (a) it arose as an
> OS implementation language, for which some level of unsafeness is
> necessary, and (b) to paraphrase a famous remark about Unix, I suspect
> it is not possible to eliminate the ability to do stupid things in C
> without also eliminating the ability to do some clever things in C.

I think that the key thing is not to offer people alternatives that
make it safer at the cost of removal of the clever stuff. It's to
offer other clever stuff instead. C is famously unreadable, and yet
most modern languages ape its syntax.

> Of course, the question is not whether C has flaws.  The question is
> why it's still being used despite those flaws.  The answer, I suspect,
> is what someone said about it being good enough.

"Worse is better."

>> My value system doesn't jive with smart phones.
>
> I would have no problem with them if they were documented.  But I've
> yet to find one that is.  I worked on a project writing code for a new
> Android phone, once, and even as developers we had to use binary blob
> drivers for important pieces.  (It also taught me how horrible the
> Android build system is.)
>
> Mind you, if/when I find one that _is_ documented

There _are_ FOSS offerings. The Jolla Sailfish devices, for instance.

-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: strangest systems I've sent email from

2016-05-18 Thread Fred Cisin
Apple Lisa operating system . . . 
implemented in Pascal

. . . . which was a pretty big mainstream success for proving
Pascal as suitable for developing systems software.
"Apple hired brilliant people for the project.  BUT, they had so little 
real-world experience that they didn't even realize what a mistake it 
would be to write an OS in a high level language.
What a bizarre statement, given that there was plenty of precedent for 
doing so very successfully.

particular compilers used. But clearly there had been successful (large
 scale) operating systems written in high level languages well before 
the  Mac.


On Wed, 18 May 2016, Toby Thain wrote:
There were very many; the Brinch Hansen book, "Classic Operating Systems" 
contains many examples. Some languages used before 1974 were even better 
suited than C to this purpose.

Do people think computers didn't exist before 1974?!


No, those were the most fun!

However, the previous exchange was about MICROCOMPUTER operating systems, 
specifically use of Pascal for writing the Lisa.
In 1974, MOST microcomputer systems programmers were waiting on hardware 
delivery.


Even by the time of the Lisa, microprocessors were pretty marginal for 
supporting an operating system written in a high level language.



I'm trying to imagine a microcomputer OS written in PL/1!   That might be 
fun!




Night before last, Windoze did an unauthorized conversion of one of my 
machines from Win7 32 bit Home Premium to Win10 32 bit Home Premium.
Most of my other PCs are XP, and the more that I used Win7, the less that 
I disliked it.
I wanted to try the Win10 for a few weeks to see if I could learn 
to stand it.
But, after a day of struggling with it, trying to get the configuration 
back into a form that could do the work that that machine has to do, I 
rolled it back to Win7.



Bill Gates must be made into a millionaire!
($70 - $80 billion reduction (nobody seems to get it without explanation))




Re: strangest systems I've sent email from

2016-05-18 Thread Chris Hanson
On May 18, 2016, at 8:33 AM, John Willis  wrote:

> Let's not forget that the bulk of the Apple Lisa operating system and
> at least large parts of the original Macintosh system software were also
> implemented in Pascal (though IIRC hand-translated into 68k assembly
> language), which was a pretty big mainstream success for proving
> Pascal as suitable for developing systems software.

Most of the original Mac operating system was originally written in 68K 
assembly, not just a hand translation of Pascal code. There were some rewrites 
of Pascal code from Lisa, for example the Memory Manager, but they went beyond 
hand translation.

Lisa mostly used Clascal, the first iteration of Object Pascal that Apple hired 
Wirth to help design. Clascal and the Lisa Application Toolkit led directly to 
Object Pascal and MacApp.

  -- Chris



Re: strangest systems I've sent email from

2016-05-18 Thread Swift Griggs
On Wed, 18 May 2016, Toby Thain wrote:
> [...] Some languages used before 1974 were even better suited than C to 
> this purpose. Do people think computers didn't exist before 1974?!

Only the cool kids could touch them until the "personal" computers started 
coming along. I'm not really aware of when that took hold other than some 
vague sense of history around the late 1970's or early 1980's. 

Being that I was born in 1974, I was rather busy that year. I had to learn 
English so I could learn to code. Plus, there was the bother of potty 
training, walking, etc... That kind of thing kept me far too busy in the 
late 1970's. I came late to the party when I started coding around eight 
years old (IIRC) on my Timex Sinclair 1000.

-Swift




Re: strangest systems I've sent email from

2016-05-18 Thread Sean Conner
It was thus said that the Great Fred Cisin once stated:
> On Wed, 18 May 2016, John Willis wrote:
> >Let's not forget that the bulk of the Apple Lisa operating system and
> >at least large parts of the original Macintosh system software were also
> >implemented in Pascal (though IIRC hand-translated into 68k assembly
> >language), which was a pretty big mainstream success for proving
> >Pascal as suitable for developing systems software.
> 
> At the time, it was sometimes interpreted differently:
> "Apple hired brilliant people for the project.  BUT, they had so little 
> real-world experience that they didn't even realize what a mistake it 
> would be to write an OS in a high level language.  Apple had to rewrite it 
> in assembly for the Mac, to make it fast enough to be usable.  Is Steve 
> Jobs color blind?  He keeps trying to make machines with very high 
resolution, but black and white, and keeps trying to seal them off from 
> the rest of the world."(- cHead)

  I thought the primary reason for the Pascal->hand assembly was more due to
memory constraints (64K ROM, 128K RAM for the original Mac) than actual
speed (although that too, was probably a concern).

  -spc



Re: strangest systems I've sent email from

2016-05-18 Thread Toby Thain

On 2016-05-18 3:17 PM, Paul Koning wrote:



On May 18, 2016, at 2:44 PM, Fred Cisin  wrote:

On Wed, 18 May 2016, John Willis wrote:

Let's not forget that the bulk of the Apple Lisa operating system and
at least large parts of the original Macintosh system software were also
implemented in Pascal (though IIRC hand-translated into 68k assembly
language), which was a pretty big mainstream success for proving
Pascal as suitable for developing systems software.


At the time, it was sometimes interpreted differently:
"Apple hired brilliant people for the project.  BUT, they had so little 
real-world experience that they didn't even realize what a mistake it would be to 
write an OS in a high level language.


What a bizarre statement, given that there was plenty of precedent for doing so 
very successfully.

It might be a valid statement if made much more nuanced, say by

talking about the slowness of the processors, or the inefficiency of the
particular compilers used. But clearly there had been successful (large
scale) operating systems written in high level languages well before the
Mac.

There were very many; the Brinch Hansen book, "Classic Operating 
Systems" contains many examples. Some languages used before 1974 were 
even better suited than C to this purpose.


Do people think computers didn't exist before 1974?!

--Toby



paul







Re: strangest systems I've sent email from

2016-05-18 Thread Toby Thain

On 2016-05-18 3:40 PM, Fred Cisin wrote:

At the time, it was sometimes interpreted differently: "Apple hired
brilliant people for the project.  BUT, they had so little real-world
experience that they didn't even realize what a mistake it would be
to write an OS in a high level language.


On Wed, 18 May 2016, Paul Koning wrote:

What a bizarre statement, given that there was plenty of precedent for
doing so very successfully.
It might be a valid statement if made much more nuanced, say by
talking about the slowness of the processors, or the inefficiency of
the particular compilers used.  But clearly there had been successful
(large scale) operating systems written in high level languages well
before the Mac.


[actually Lisa was the issue]
I think that there was a general perception that microprocessors were
not fast enough to function properly without hand-optimized assembly
language.


That is true of some modules, like QuickDraw on the Mac. It is certainly 
not true generally.


--Toby


In those days, even games did not need "time-delay loops".

But, "Moore's Law" held that it wouldn't be much longer.
Just one doubling of the speed of the Lisa's hardware would have been
enough to silence the speed complaints.






Re: strangest systems I've sent email from

2016-05-18 Thread Paul Koning

> On May 18, 2016, at 2:44 PM, Fred Cisin  wrote:
> 
> On Wed, 18 May 2016, John Willis wrote:
>> Let's not forget that the bulk of the Apple Lisa operating system and
>> at least large parts of the original Macintosh system software were also
>> implemented in Pascal (though IIRC hand-translated into 68k assembly
>> language), which was a pretty big mainstream success for proving
>> Pascal as suitable for developing systems software.
> 
> At the time, it was sometimes interpreted differently:
> "Apple hired brilliant people for the project.  BUT, they had so little 
> real-world experience that they didn't even realize what a mistake it would 
> be to write an OS in a high level language. 

What a bizarre statement, given that there was plenty of precedent for doing so 
very successfully.

It might be a valid statement if made much more nuanced, say by talking about 
the slowness of the processors, or the inefficiency of the particular compilers 
used.  But clearly there had been successful (large scale) operating systems 
written in high level languages well before the Mac.

paul




Re: strangest systems I've sent email from

2016-05-18 Thread Sean Caron

On Wed, 18 May 2016, Fred Cisin wrote:


On Wed, 18 May 2016, Sean Caron wrote:
Pascal was probably the predominant applications development language on 
Mac OS through the mid 1990s or so, no? Certainly all the Toolbox bindings 
were originally written with the idea that people would be developing in 
Pascal.


Windoze also used Pascal subroutine calling convention!

But, the trend was on, over to using C.



Really? Interesting; I did not know that. To this day I've never really 
made any attempts at Windows desktop application development.


Did the Windows APIs also send and receive Pascal strings? I recall the 
conversions confounding me for a bit when I was trying to learn C and the 
Macintosh Toolbox concurrently as a youngster.


Best,

Sean



Re: strangest systems I've sent email from

2016-05-18 Thread Fred Cisin

On Wed, 18 May 2016, Sean Caron wrote:
Pascal was probably the predominant applications development language on Mac 
OS through the mid 1990s or so, no? Certainly all the Toolbox bindings were 
originally written with the idea that people would be developing in Pascal.


Windoze also used Pascal subroutine calling convention!

But, the trend was on, over to using C.



Re: strangest systems I've sent email from

2016-05-18 Thread Fred Cisin

On Wed, 18 May 2016, John Willis wrote:

Let's not forget that the bulk of the Apple Lisa operating system and
at least large parts of the original Macintosh system software were also
implemented in Pascal (though IIRC hand-translated into 68k assembly
language), which was a pretty big mainstream success for proving
Pascal as suitable for developing systems software.


At the time, it was sometimes interpreted differently:
"Apple hired brilliant people for the project.  BUT, they had so little 
real-world experience that they didn't even realize what a mistake it 
would be to write an OS in a high level language.  Apple had to rewrite it 
in assembly for the Mac, to make it fast enough to be usable.  Is Steve 
Jobs color blind?  He keeps trying to make machines with very high 
resolution, but black and white, and keeps trying to seal them off from 
the rest of the world."(- cHead)







Re: strangest systems I've sent email from

2016-05-18 Thread Sean Caron

On Wed, 18 May 2016, John Willis wrote:




The problem is that type #2 here covers most people, and sadly, the
ivory towers of the industry & academia do not accept that certain
languages or language features are actually widely-liked or attractive
to people because they do not fit with the prevailing wisdom. So,
although, for instance, TP & later Delphi demonstrated that the Pascal
family can be appealing, practical, and a desirable choice; and the
Oberon OS proves that the Pascal family can be used to build an
entire, practical, useful and widely-used (in its niche) OS from the
metal up.



Let's not forget that the bulk of the Apple Lisa operating system and
at least large parts of the original Macintosh system software were also
implemented in Pascal (though IIRC hand-translated into 68k assembly
language), which was a pretty big mainstream success for proving
Pascal as suitable for developing systems software.



Pascal was probably the predominant applications development language on 
Mac OS through the mid 1990s or so, no? Certainly all the Toolbox bindings 
were originally written with the idea that people would be developing in 
Pascal.


Best,

Sean



Re: strangest systems I've sent email from

2016-05-18 Thread John Willis
>
>
> The problem is that type #2 here covers most people, and sadly, the
> ivory towers of the industry & academia do not accept that certain
> languages or language features are actually widely-liked or attractive
> to people because they do not fit with the prevailing wisdom. So,
> although, for instance, TP & later Delphi demonstrated that the Pascal
> family can be appealing, practical, and a desirable choice; and the
> Oberon OS proves that the Pascal family can be used to build an
> entire, practical, useful and widely-used (in its niche) OS from the
> metal up.
>
>
Let's not forget that the bulk of the Apple Lisa operating system and
at least large parts of the original Macintosh system software were also
implemented in Pascal (though IIRC hand-translated into 68k assembly
language), which was a pretty big mainstream success for proving
Pascal as suitable for developing systems software.


Re: strangest systems I've sent email from

2016-05-18 Thread Liam Proven
On 17 May 2016 at 20:21, Sean Conner  wrote:
> It was thus said that the Great Liam Proven once stated:
>> This has been waiting for a reply for too long...
>
>   As has this ...

:-)

>> On 4 May 2016 at 20:59, Sean Conner  wrote:
>> >
>> >   Part of that was the MMU-less 68000.  It certainly made message passing
>> > cheap (since you could just send a pointer and avoid copying the message)
>>
>> Well, yes. I know several Amiga fans who refer to classic AmigaOS as
>> being a de-facto microkernel implementation, but ISTM that that is
>> overly simplistic. The point of microkernels, ISTM, is that the
>> different elements of an OS are in different processes, isolated by
>> memory management, and communicate over defined interfaces to work
>> together to provide the functionality of a conventional monolithic
>> kernel.
>
>   Nope, memory management is not a requirement for a microkernel.  It's a
> "nice to have" but not "fundamental to implementation." Just as you can have
> a preemptive kernel on a CPU without memory management (any 68000 based
> system) or user/kernel level instruction split (any 8-bit CPU).
>
>> If they're all in the same memory space, then even if they're
>> functionally separate, they can communicate through shared memory --
>
>   While the Amiga may have "cheated" by passing a reference to the message
> instead of copying it, conceptually, it was passing a message (for all the
> user knows, the message *could* be copied before being sent).  I still
> consider AmigaOS as a message based operating system.
>
>   Also, QNX was first written for the 8088, a machine not known for having a
> memory management unit, nor supervisor mode instructions.

There is surely an important difference between requirements to meet a
definition, and requirements for being of pragmatic value.

Sure, there are microkernels on MMU-less hardware. I was vaguely aware
of QNX on 8088 although it had slipped my mind.

But in a real-world general-purpose computer, is there much point?
AIUI the main intention of microkernel designs in the Unix space was
to promote code simplicity and clarity, ease debugging, and increase
reliability. Is that not so?

>> >  I think what made the Amiga so fast (even with a 7.1MHz CPU)
>> > was the specialized hardware.  You pretty much used the MC68000 to script
>> > the hardware.
>>
>> That seems a bit harsh! :-)
>
>   Not in light of this blog article: http://prog21.dadgum.com/173.html

Conceded!

>   While I might not fully agree with his views, he does make some compelling
> arguments and makes me think.
>
>> But Curtis Yarvin is a strange person, and at least via his
>> pseudonymous mouthpiece Mencius Moldbug, has some unpalatable views.
>>
>> You are, I presume, aware of the controversy over his appearance at
>> LambdaConf this year?
>
>   Yes I am.  My view:  no one is forcing you to attend his talk.  And if no
> one attends his talks, the likelihood of him appearing again (or at another
> conference) goes down.  What is wrong with these people?

Well, I take your point and TBH I think I'd mostly agree with you. But
I can also see that for many people Mencius Moldbug goes too far.

>> >   Nice in theory.  Glacial performance in practice.
>>
>> Everything was glacial once.
>>
>> We've had 4 decades of very well-funded R&D aimed at producing faster
>> C machines. Oddly, x86 has remained ahead of the pack and most of the
>> RISC families ended up sidelined, except ARM. Funny how things turn
>> out.
>
>   The Wintel monopoly of the desktop flooded Intel with enough money to keep
> the x86 line going.  Given enough money, even pigs can fly.

True.

I liked a friend's suggestion recently on Twitter:

@sbisson

Here's a thought. If Intel updated the i960 RISC architecture for
modern processes, would it have the ARM competitor it's looking for?


>   Internally, the x86 line is RISC.  The legacy instructions are read in
> and translated into an internal machine language that is more RISC-like than
> CISC.  All sorts of crazy things going on inside that CPU architecture.

True, but it's not alone in that. Transmeta went even further, and I am
still sad that they did not push the idea harder.

>> >   The Lisp machines had tagged memory to help with the garbage collection
>> > and avoid wasting tons of memory.  Yeah, it also had CPU instructions like
>> > CAR and CDR (even the IBM 704 had those [4]).  Even the VAX had QUEUE
>> > instructions to add and remove items from a linked list.  I think it's
>> > really the tagged memory that made the Lisp machines special.
>>
>> We have 64-bit machines now. GPUs are wider still. I think we could
>> afford a few tag bits.
>
>   I personally wouldn't mind a few user bits per byte myself.  I'm not sure
> we'll ever see such a system.

Nor am I. Doesn't stop me hoping, though.

>> >  Of course we need
>> > to burn the disc packs.
>>
>> I don't understand this.
>
>   It's in reference to Alan Kay saying "burn the disc packs" 

Re: strangest systems I've sent email from

2016-05-18 Thread Chris Hanson
On May 17, 2016, at 11:21 AM, Sean Conner  wrote:
> 
>  While the Amiga may have "cheated" by passing a reference to the message
> instead of copying it, conceptually, it was passing a message (for all the
> user knows, the message *could* be copied before being sent).  I still
> consider AmigaOS as a message based operating system.  

This is mostly what real microkernels do as well: Mach got very well-documented 
speedups from zero-copy sends that were specifically enabled by clever use of 
memory management hardware, and these kinds of techniques are still used in xnu 
on OS X today.

  -- Chris



Re: strangest systems I've sent email from

2016-05-18 Thread Chris Hanson
On May 11, 2016, at 5:41 PM, ben  wrote:
> 
> On 5/11/2016 5:54 PM, Toby Thain wrote:
>> On 2016-05-11 7:43 PM, Liam Proven wrote:
>>> ...
>>> If we'd had 4 decades of effort aimed at fast Lisp Machines, I think
>>> we'd have them.
>> 
>> Compiled Lisp, even on generic hardware, is fast. Fast enough, in fact,
>> that it obviated Symbolics. (More in Richard P. Gabriel's history of
>> Lucid.) See also: The newly open sourced Chez Scheme.
> 
> But Lisp is still sequential processing as far as I can see? How do you speed 
> that up?

Lisp has had data structures beyond lists for decades.

You don’t think a CLOS object is implemented as lists of lists of lists, do you?

  -- Chris



Re: strangest systems I've sent email from

2016-05-17 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> This has been waiting for a reply for too long...

  As has this ...

> On 4 May 2016 at 20:59, Sean Conner  wrote:
> >
> >   Part of that was the MMU-less 68000.  It certainly made message passing
> > cheap (since you could just send a pointer and avoid copying the message)
> 
> Well, yes. I know several Amiga fans who refer to classic AmigaOS as
> being a de-facto microkernel implementation, but ISTM that that is
> overly simplistic. The point of microkernels, ISTM, is that the
> different elements of an OS are in different processes, isolated by
> memory management, and communicate over defined interfaces to work
> together to provide the functionality of a conventional monolithic
> kernel.

  Nope, memory management is not a requirement for a microkernel.  It's a
"nice to have" but not "fundamental to implementation." Just as you can have
a preemptive kernel on a CPU without memory management (any 68000 based
system) or user/kernel level instruction split (any 8-bit CPU).

> If they're all in the same memory space, then even if they're
> functionally separate, they can communicate through shared memory --

  While the Amiga may have "cheated" by passing a reference to the message
instead of copying it, conceptually, it was passing a message (for all the
user knows, the message *could* be copied before being sent).  I still
consider AmigaOS as a message based operating system.  

  Also, QNX was first written for the 8088, a machine not known for having a
memory management unit, nor supervisor mode instructions.

> >  I think what made the Amiga so fast (even with a 7.1MHz CPU)
> > was the specialized hardware.  You pretty much used the MC68000 to script
> > the hardware.
> 
> That seems a bit harsh! :-)

  Not in light of this blog article: http://prog21.dadgum.com/173.html

  While I might not fully agree with his views, he does make some compelling
arguments and makes me think.

> But Curtis Yarvin is a strange person, and at least via his
> pseudonymous mouthpiece Mencius Moldbug, has some unpalatable views.
> 
> You are, I presume, aware of the controversy over his appearance at
> LambdaConf this year?

  Yes I am.  My view:  no one is forcing you to attend his talk.  And if no
one attends his talks, the likelihood of him appearing again (or at another
conference) goes down.  What is wrong with these people?

> >   Nice in theory.  Glacial performance in practice.
> 
> Everything was glacial once.
> 
> We've had 4 decades of very well-funded R&D aimed at producing faster
> C machines. Oddly, x86 has remained ahead of the pack and most of the
> RISC families ended up sidelined, except ARM. Funny how things turn
> out.

  The Wintel monopoly of the desktop flooded Intel with enough money to keep
the x86 line going.  Given enough money, even pigs can fly.

  Internally, the x86 line is RISC.  The legacy instructions are read in
and translated into an internal machine language that is more RISC-like than
CISC.  All sorts of crazy things going on inside that CPU architecture.

> >   The Lisp machines had tagged memory to help with the garbage collection
> > and avoid wasting tons of memory.  Yeah, it also had CPU instructions like
> > CAR and CDR (even the IBM 704 had those [4]).  Even the VAX had QUEUE
> > instructions to add and remove items from a linked list.  I think it's
> > really the tagged memory that made the Lisp machines special.
> 
> We have 64-bit machines now. GPUs are wider still. I think we could
> afford a few tag bits.

  I personally wouldn't mind a few user bits per byte myself.  I'm not sure
we'll ever see such a system.

> >  Of course we need
> > to burn the disc packs.
> 
> I don't understand this.

  It's in reference to Alan Kay saying "burn the disc packs" with respect to
Smalltalk (which I was told is a mistake on my part, but then everybody
failed to read Alan's mind about "object oriented" programming and he's
still pissed off about that, so misunderstanding him seems to be par for
the course).

  It's also an oblique reference to Charles Moore, who has gone on record as
saying the ANSI Forth Standard is a mistake that no one should use---in
fact, he's gone as far as saying that *any* "standard" Forth misses the
point and that if you want Forth, write it your damn self!

> If you mean that, in order to get to saner, more productive, more
> powerful computer architectures, we need to throw away much of what's
> been built and go right back to building new foundations, then yes, I
> fear so.

  Careful.  Read up on the Intel iAPX 432, a state-of-the-art CPU in 1981.

> Yes, tear down the foundations and rebuild, but on top of the new
> replacement, much existing code could, in principle, be retained and
> re-used.

  And Real Soon Now, we'll all be running point-to-point connections on
IPv6 ... 

  -spc 


Myths about Lisp [was RE: strangest systems I've sent email from]

2016-05-12 Thread Rich Alderson
From: ben
Sent: Wednesday, May 11, 2016 5:42 PM

> On 5/11/2016 5:54 PM, Toby Thain wrote:

>> On 2016-05-11 7:43 PM, Liam Proven wrote:

>>> If we'd had 4 decades of effort aimed at fast Lisp Machines, I think
>>> we'd have them.

>> Compiled Lisp, even on generic hardware, is fast. Fast enough, in fact,
>> that it obviated Symbolics. (More in Richard P. Gabriel's history of
>> Lucid.) See also: The newly open sourced Chez Scheme.

> But Lisp is still sequential processing as far as I can see? How do you 
> speed that up?

This is another of the long-standing myths perpetuated by people who
know nothing about the language.

It has literally been decades since lists were the only data structure
available in Lisp.  If you need non-sequential access to process data,
arrays are the ticket, or hashes.  Choose the best data structure for
the problem at hand.

(Similarly, data types other than atoms have been around since the very
 earliest LISP.  They just weren't sexy, and didn't get a lot of press
 since they weren't novel and difficult to understand.  Math code from
 the MACLISP compiler was better than that generated by the F40 FORTRAN
 compiler.)

>> The myths around garbage collection are also thick, but gc doesn't
>> impede efficiency except under conditions of insufficient headroom (long
>> documented by research old and new).

> Well GC is every Tuesday here. :)

You joke, but in one of the visionary papers on GC from the early 70s, a
tongue-in-cheek scenario was proposed in which GC was done by a portable
system which had sufficient memory and would visit large facilities to do
background GC for them on, say, a monthly basis.

Rich

Rich Alderson
Vintage Computing Sr. Systems Engineer
Living Computer Museum
2245 1st Avenue S
Seattle, WA 98134

mailto:ri...@livingcomputermuseum.org

http://www.LivingComputerMuseum.org/


Re: strangest systems I've sent email from

2016-05-11 Thread Mouse
>> Compiled Lisp, even on generic hardware, is fast.  [...]
> But Lisp is still sequential processing as far as I can see?  How do you
> speed that up?

Just as sequential as C or PL/I or whatever else.

As for speeding it up?  As a compiler geek I'm strictly a dilettante,
so I don't really know, but I would imagine you'd do it much the way
you would optimize any other language.

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML   mo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: strangest systems I've sent email from

2016-05-11 Thread ben

On 5/11/2016 5:54 PM, Toby Thain wrote:

On 2016-05-11 7:43 PM, Liam Proven wrote:

...
If we'd had 4 decades of effort aimed at fast Lisp Machines, I think
we'd have them.


Compiled Lisp, even on generic hardware, is fast. Fast enough, in fact,
that it obviated Symbolics. (More in Richard P. Gabriel's history of
Lucid.) See also: The newly open sourced Chez Scheme.


But Lisp is still sequential processing as far as I can see? How do you 
speed that up?



The myths around garbage collection are also thick, but gc doesn't
impede efficiency except under conditions of insufficient headroom (long
documented by research old and new).


Well GC is every Tuesday here. :)


--Toby

Ben.





Re: strangest systems I've sent email from

2016-05-11 Thread Toby Thain

On 2016-05-11 7:43 PM, Liam Proven wrote:

...
If we'd had 4 decades of effort aimed at fast Lisp Machines, I think
we'd have them.


Compiled Lisp, even on generic hardware, is fast. Fast enough, in fact, 
that it obviated Symbolics. (More in Richard P. Gabriel's history of 
Lucid.) See also: The newly open sourced Chez Scheme.


The myths around garbage collection are also thick, but gc doesn't 
impede efficiency except under conditions of insufficient headroom (long 
documented by research old and new).


--Toby



Re: strangest systems I've sent email from

2016-05-11 Thread Liam Proven
This has been waiting for a reply for too long...

On 4 May 2016 at 20:59, Sean Conner  wrote:
> It was thus said that the Great Liam Proven once stated:
>> On 29 April 2016 at 21:06, Sean Conner  wrote:
>> > It was thus said that the Great Liam Proven once stated:
>>
>> >   I read that and it doesn't really seem that CAOS would have been much
>> > better than what actually came out.  Okay, the potentially better resource
>> > tracking would be nice, but that's about it really.
>>
>> The story of ARX, the unfinished Acorn OS in Modula-2 for the
>> then-prototype Archimedes, is similar.
>>
>> No, it probably wouldn't have been all that radical.
>>
>> I wonder how much of Amiga OS' famed performance, compactness, etc.
>> was a direct result of its adaptation to the MMU-less 68000, and thus
>> could never have been implemented in a way that could have been made
>> more robust on later chips such as the 68030?
>
>   Part of that was the MMU-less 68000.  It certainly made message passing
> cheap (since you could just send a pointer and avoid copying the message)

Well, yes. I know several Amiga fans who refer to classic AmigaOS as
being a de-facto microkernel implementation, but ISTM that that is
overly simplistic. The point of microkernels, ISTM, is that the
different elements of an OS are in different processes, isolated by
memory management, and communicate over defined interfaces to work
together to provide the functionality of a conventional monolithic
kernel.

My reading suggests that one of the biggest problems with this is performance.

If they're all in the same memory space, then even if they're
functionally separate, they can communicate through shared memory --
meaning that although it might /look/ superficially like a
microkernel, the elements are not in fact isolated from one another,
so practically, pragmatically, it's not a microkernel. If there is no
separation between the cooperating processes, then it's just a
question of design aesthetics, rather than it being a microkernel.

> but QNX shows that even with copying, you can still have a fast operating
> system [1].

Indeed. And of course at one point it looked like QNX would be the
basis for the next-gen Amiga OS:

http://www.amigahistory.plus.com/qnxanno.html

http://www.theregister.co.uk/1999/07/09/qnx_developer_pleas_for_amiga/

http://www.trollaxor.com/2005/06/how-qnx-failed-amiga.html

>  I think what made the Amiga so fast (even with a 7.1MHz CPU)
> was the specialized hardware.  You pretty much used the MC68000 to script
> the hardware.

That seems a bit harsh! :-)

>> >   I spent some hours on the Urbit site.  Between the obscure writing,
>> > entirely new jargon and the "we're going to change the world" attitude,
>> > it very much feels like the Xanadu Project.
>>
>> I am not sure I'm the person to try to summarise it.
>>
>> I've nicked my own effort from my tech blog:
>>
>> I've not tried Urbit. (Yet.)
>>
>> But my impression is this:
>>
>> It's not obfuscatory for the hell of it. It is, yes, but for a valid
>> reason: that he doesn't want to waste time explaining or supporting
>> it. It's hard because you need to be v v bright to fathom it;
>> obscurity is a user filter.
>
>   Red flag #1.

Point, yes.

But Curtis Yarvin is a strange person, and at least via his
pseudonymous mouthpiece Mencius Moldbug, has some unpalatable views.

You are, I presume, aware of the controversy over his appearance at
LambdaConf this year?

E.g.

http://www.inc.com/tess-townsend/why-it-matters-that-an-obscure-programming-conference-is-hosting-mencius-moldbug.html

>> He claims NOT to be a Lisp type, not to have known anything much about
>> the language or LispMs, & to have re-invented some of the underlying
>> ideas independently. I'm not sure I believe this.
>>
>> My view of it from a technical perspective is this. (This may sound
>> over-dramatic.)
>>
>> We are so mired in the C world that modern CPUs are essentially C
>> machines. The conceptual model of C, of essentially all compilers, OSes,
>> imperative languages,  is a flawed one -- it is too simple an
>> abstraction. Q.v. http://www.loper-os.org/?p=55
>
  Ah yes, Stanislav.  Yet another person who goes on and on about how bad
> things are and makes oblique references to a better way without ever going
> into detail and expecting everyone to read his mind (yes, I don't have a
> high opinion of him either).

I gather.

He did, at one point, express fairly clearly what he wanted. The
problem is that he then changed his mind, went off on various tangents
concerning designing his own CPU, and seems to have got mired in that.
Reminds me of Charles Babbage and his failure to produce a finalised
Difference Engine,  because at first he got distracted by tweaking it,
and later distracted by the Analytical Engine.

If he'd focussed on delivering the DE, it would have paid for the AE,
and the world would be a profoundly different place today.

>   And you do realize 

Re: strangest systems I've sent email from

2016-05-09 Thread Jerome H. Fine

>Mouse wrote:


Note that PDP-11 autoincrement and autodecrement exist only when
operating on pointers that are being indirected through, and even then
only when the pointers are in registers.  C's ++ and -- work fine on
things other than pointers, and on pointers when not indirecting
through them.


Maybe users of C and C++ don't bother, but those users who write
in assembler for the PDP-11 will often take advantage of the
autoincrement (and, on rare occasions, the autodecrement)
properties of the instruction set to add (and rarely subtract) 2 and
sometimes 4 within a given register:

TST   (R3)+
CMP   (R3)+,(R3)+
CMP   (R2)+,(R3)+

rather than

ADD   #2,R3
ADD   #4,R3
ADD   #2,R2 / ADD   #2,R3

Naturally, the original value MUST be even and the original value
in R3 can't be the address of an area in the IOPAGE which could
then result in an address exception.  These opportunities occur
more often than would be expected, and many users took advantage
of a tiny reduction in the size of a program, especially in the early
days when core storage was severely limited.
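
For comparison, the C idiom that maps directly onto those addressing
modes (a sketch; actual code generation depends on the compiler):

    #include <stddef.h>

    /* Stepping pointers while using them: on a PDP-11 compiler the
       loop body typically becomes a single MOV (R2)+,(R3)+ instruction. */
    void copy_words(short *dst, const short *src, size_t n)
    {
        while (n--)
            *dst++ = *src++;
    }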

Jerome Fine


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-05-09 Thread Jerome H. Fine

>Dave Wade wrote:


Fortran has an EQUIVALENCE statement, COBOL has redefines. Both allow the
subversion of types at the drop of a hat.


I can think of two examples which were not so much subversion of types
as they were a lack of language flexibility:

(a)  Very early in my FORTRAN experience, I needed to calculate
  values to solve a set of differential equations.  The calculations
  could all be done using ordinary REAL floating point variables.
  However, the precision required to retain sufficient accuracy
  over the range of the solution required the state variables to be
  held as DOUBLE PRECISION variables.  The simple solution
  was to define the increments to be added to the state variables
  as both REAL and DOUBLE PRECISION and to use the
  EQUIVALENCE statement for both.  The state variables were
  managed in the same manner.  When the increments were being
  calculated, the REAL variables were used.  When the increments
  were being added to the state variables, the DOUBLE PRECISION
  variables were used.  To confirm that this was a reasonable
  method, at one point only DOUBLE PRECISION variables
  were used, to see if the overall results were different.  By using
  only REAL variables during the calculation of the increments,
  calculation time was substantially reduced without sacrificing
  any overall accuracy, since the increments were always a factor
  of a million or more smaller than the state variables.  As just one
  example, one state variable was a distance of about 20,000,000
  feet and each increment was at most less than one foot.  It was
  sufficient to calculate the increment with a REAL variable, then
  switch to DOUBLE PRECISION when the increment was
  added to the state variable.  By the end of the solution, the state
  variable had been reduced to about 100,000 feet.  However, using
  only REAL variables would have been inaccurate, and using only
  DOUBLE PRECISION was a waste of CPU time.

(b)  At one point, the value within a variable for the number of days
  since 1910 exceeded 32767, when the Y2K situation was also close
  to becoming a problem.  However, the total number of days never
  exceeded 65535, so a 4-byte integer was never required.  BUT,
  the FORTRAN flavour which was being used did not support
  UNSIGNED variables.  The only occasion when that became a
  problem was when the number of days had to be divided by 7
  in order to determine the day of the week.  Since that FORTRAN
  compiler thought the variable was SIGNED, before dividing by 7
  the "SXT R0" instruction extended the high-order bit of R1 into R0
  just prior to the "DIV #7,R0" instruction.  Rather than some very
  complicated manner of managing that single situation, the best
  solution seemed to be to change the "SXT R0" instruction to the
  "CLR R0" instruction -- which, in effect, changed the variable used
  during the divide operation from SIGNED to UNSIGNED.  If that
  FORTRAN compiler had also supported UNSIGNED variables,
  the same sort of solution as was used in (a) might have been used.
  It was recognized that making such a change to the program being
  executed would force future updates through the same requirement;
  however, there were very few "SXT R0" instructions in use and only
  ONE which was followed by the "DIV #7,R0" instruction.

Probably the same sort of method could also be used with C when such
situations occur.  In general, although it might be considered a subversion,
a more appropriate point of view might be that the same data requires a
different protocol in the algorithm that is being used.
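
Indeed, in C the two tricks come out as ordinary conversions rather than
patched instructions.  A hedged sketch (the variable names are invented for
illustration):

    #include <stdint.h>

    /* (a) compute the small increments in cheap single precision,
       widen only when accumulating into the state variable */
    double step_state(double state, float a, float b)
    {
        float increment = a * b;
        return state + increment;
    }

    /* (b) the "CLR R0 instead of SXT R0" trick: reinterpret the
       16-bit day count as unsigned before dividing by 7 */
    unsigned day_of_week(int16_t days_since_1910)
    {
        return (uint16_t)days_since_1910 % 7u;
    }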

Jerome Fine


RE: strangest systems I've sent email from

2016-05-04 Thread Rich Alderson
From: Sean Conner
Sent: Wednesday, May 04, 2016 12:00 PM

> It was thus said that the Great Liam Proven once stated:

>> The way LispMs worked, AIUI, is that the machine language wasn't Lisp,
>> it was something far simpler, but designed to map onto Lisp concepts.

> The Lisp machines had tagged memory to help with the garbage collection
> and avoid wasting tons of memory.  Yeah, it also had CPU instructions like
> CAR and CDR (even the IBM 704 had those [4]).

> [4]   It's a joke.  Look it up.

Of course it's a joke.  LISP had CAR and CDR because of the 704, along with
CTR and CPR!  (OK, tags and prefixes got dropped in later implementations on
later hardware.)

Rich


Rich Alderson
Vintage Computing Sr. Systems Engineer
Living Computer Museum
2245 1st Avenue S
Seattle, WA 98134

mailto:ri...@livingcomputermuseum.org

http://www.LivingComputerMuseum.org/


Re: strangest systems I've sent email from

2016-05-04 Thread Sean Conner
It was thus said that the Great Liam Proven once stated:
> On 29 April 2016 at 21:06, Sean Conner  wrote:
> > It was thus said that the Great Liam Proven once stated:
> 
> >   I read that and it doesn't really seem that CAOS would have been much
> > better than what actually came out.  Okay, the potentially better resource
> > tracking would be nice, but that's about it really.
> 
> The story of ARX, the unfinished Acorn OS in Modula-2 for the
> then-prototype Archimedes, is similar.
> 
> No, it probably wouldn't have been all that radical.
> 
> I wonder how much of Amiga OS' famed performance, compactness, etc.
> was a direct result of its adaptation to the MMU-less 68000, and thus
> could never have been implemented in a way that could have been made
> more robust on later chips such as the 68030?

  Part of that was the MMU-less 68000.  It certainly made message passing
cheap (since you could just send a pointer and avoid copying the message)
but QNX shows that even with copying, you can still have a fast operating
system [1].  I think what made the Amiga so fast (even with a 7.1MHz CPU)
was the specialized hardware.  You pretty much used the MC68000 to script
the hardware.

> >   I spent some hours on the Urbit site.  Between the obscure writing,
> > entirely new jargon and the "we're going to change the world" attitude,
> > it very much feels like the Xanadu Project.
> 
> I am not sure I'm the person to try to summarise it.
> 
> I've nicked my own effort from my tech blog:
> 
> I've not tried Urbit. (Yet.)
> 
> But my impression is this:
> 
> It's not obfuscatory for the hell of it. It is, yes, but for a valid
> reason: that he doesn't want to waste time explaining or supporting
> it. It's hard because you need to be v v bright to fathom it;
> obscurity is a user filter.

  Red flag #1.

> He claims NOT to be a Lisp type, not to have known anything much about
> the language or LispMs, & to have re-invented some of the underlying
> ideas independently. I'm not sure I believe this.
> 
> My view of it from a technical perspective is this. (This may sound
> over-dramatic.)
> 
> We are so mired in the C world that modern CPUs are essentially C
> machines. The conceptual model of C, of essentially all compilers, OSes,
> imperative languages,  is a flawed one -- it is too simple an
> abstraction. Q.v. http://www.loper-os.org/?p=55

  Ah yes, Stanislav.  Yet another person who goes on and on about how bad
things are and makes oblique references to a better way without ever going
into detail and expecting everyone to read his mind (yes, I don't have a
high opinion of him either).  

  And you do realize that Stanislav does not think highly of Urbit (he
considers Yarvin as being deluded [2]).

> Instead of bytes & blocks of them, the basic unit is the list.
> Operations are defined in terms of lists, not bytes. You define a few
> very simple operations & that's all you need.

  Nice in theory.  Glacial performance in practice.

> The way LispMs worked, AIUI, is that the machine language wasn't Lisp,
> it was something far simpler, but designed to map onto Lisp concepts.
> 
> I have been told that modern CPU design & optimisations & so on map
> really poorly onto this set of primitives. That LispM CPUs were stack
> machines, but modern processors are register machines. I am not
> competent to judge the truth of this.

  The Lisp machines had tagged memory to help with the garbage collection
and avoid wasting tons of memory.  Yeah, it also had CPU instructions like
CAR and CDR (even the IBM 704 had those [4]).  Even the VAX had QUEUE
instructions to add and remove items from a linked list.  I think it's
really the tagged memory that made the Lisp machines special.

> If Yarvin's claims are to be believed, he has done 2 intertwined things:
> 
> [1] Experimentally or theoretically worked out something akin to these
> primitives.
> [2] Found or worked out a way to map them onto modern CPUs.

  List comprehensions, I believe.

> This is his "machine code". Something that is not directly connected
> or associated with modern CPUs' machine languages. He has built
> something OTHER but defined his own odd language to describe it &
> implement it. He has DELIBERATELY made it unlike anything else so you
> don't bring across preconceptions & mental impurities. You need to
> start over.

  Eh.  I see that, and raise you a purely functional (as in---pure
functions, no data) implementation of FizzBuzz:

https://codon.com/programming-with-nothing

> But, as far as I can judge, the design is sane, clean, & I am taking
> it that he has reasons for the weirdness. I don't think it's
> gratuitous.

  We'll have to agree to disagree on this point.  I think he's being
intentionally obtuse to appear profound.

> So what on a LispM was the machine language, in Urbit, is Nock. It's a
> whole new machine language layer, placed on top of an existing OS
> stack, so I'm not surprised if it's horrendously 

Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-05-04 Thread Liam Proven
On 29 April 2016 at 22:23, Eric Smith  wrote:
> More than 95% of my work is in C,
> because that's what my clients demand, so people are usually surprised
> to hear my opinion that C is a terrible choice for almost anything.


I am in an analogous boat. Most of my career has been based on
Windows, either supporting it, working on it, writing about it or
whatever. And yet, it is one of my least-favourite OSes.

-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: strangest systems I've sent email from

2016-05-04 Thread Liam Proven
On 30 April 2016 at 06:36, Tapley, Mark  wrote:
> On Apr 28, 2016, at 8:38 AM, Liam Proven  wrote:
>
>> I loved BeOS but never saw the Be Book. :-(
>
> Sorry if this is a duplicate, I’m behind on the list by a little. I think the 
> Be Book is effectively on-line at
>
> https://www.haiku-os.org/legacy-docs/bebook/
>
> Haiku, open-source and “inspired by BeOS”, is pretty easy to install on a 
> virtual machine on VMWare or (reportedly) VirtualBox. I don’t know how 
> faithful it is to the original Be experience, but just in case you are 
> interested.

Thanks!

Yes, I've experimented with Haiku in VMs.

But cheers for the link to the book -- I'll have a look.


-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: strangest systems I've sent email from

2016-05-04 Thread Liam Proven
On 29 April 2016 at 21:06, Sean Conner  wrote:
> It was thus said that the Great Liam Proven once stated:
>>
>> Do you really think it's growing? I'd like very much to believe that.
>> I see little sign of it. I do hope you're right.
>
>   I read Hacker News and some of the more programmer related parts of
> Reddit, and yes, there are some vocal people there that would like to see C
> outlawed.


Fair enough. (So do I, sometimes, incidentally.) But my impression is
that it's only a thing among the weirdos, not in industry or
mainstream academia. :(

> I, personally, don't agree with that.  I would however, like to
> see C programmers know assembly language before using C (I think that would
> help a lot, especially with pointer usage).

Sounds like a step back towards older times, and as such, I fear it is
very unlikely.

>   I read that and it doesn't really seem that CAOS would have been much
> better than what actually came out.  Okay, the potentially better resource
> tracking would be nice, but that's about it really.

The story of ARX, the unfinished Acorn OS in Modula-2 for the
then-prototype Archimedes, is similar.

No, it probably wouldn't have been all that radical.

I wonder how much of Amiga OS' famed performance, compactness, etc.
was a direct result of its adaptation to the MMU-less 68000, and thus
could never have been implemented in a way that could have been made
more robust on later chips such as the 68030?


>  I was expecting
> something like Synthesis OS:
>
> http://valerieaurora.org/synthesis/SynthesisOS/
>
> (which *is* mind blowing and I wish the code was available).

Ah, yes, I've read much of that. I agree.

Mind you, Taos was similarly mind-blowing for me.

>> GNOME 1 was heavily based on CORBA. (I believe -- but am not sure --
>> that later versions discarded much of it.) KDE reinvented that
>> particular wheel.
>
>   I blew that one---CORBA lived for about ten years longer than I expected.

Yeah. :-(

I think elements of it are still around, too.


>   Wait ... what?  You first decried about poorly-designed OSes, and then
> went on to say there were better than before?  I'm confused.  Or are you
> saying that we should have something *other* than what we do have?

Well, yes, exactly. That is precisely what I'm saying.

>   I spent some hours on the Urbit site.  Between the obscure writing,
> entirely new jargon and the "we're going to change the world" attitude, it
> very much feels like the Xanadu Project.

I am not sure I'm the person to try to summarise it.

I've nicked my own effort from my tech blog:

I've not tried Urbit. (Yet.)

But my impression is this:

It's not obfuscatory for the hell of it. It is, yes, but for a valid
reason: that he doesn't want to waste time explaining or supporting
it. It's hard because you need to be v v bright to fathom it;
obscurity is a user filter.

He claims NOT to be a Lisp type, not to have known anything much about
the language or LispMs, & to have re-invented some of the underlying
ideas independently. I'm not sure I believe this.

My view of it from a technical perspective is this. (This may sound
over-dramatic.)

We are so mired in the C world that modern CPUs are essentially C
machines. The conceptual model of C, of essentially all compilers,
OSes, imperative languages,  is a flawed one -- it is too simple an
abstraction. Q.v. http://www.loper-os.org/?p=55

The LispM model was a better one, because it's slightly richer. That's
"slightly" in the St Exupery sense, i.e.

"…perfection is attained not when there is nothing more to add, but
when there is nothing more to remove."

Instead of bytes & blocks of them, the basic unit is the list.
Operations are defined in terms of lists, not bytes. You define a few
very simple operations & that's all you need.

http://stackoverflow.com/questions/3482389/how-many-primitives-does-it-take-to-build-a-lisp-machine-ten-seven-or-five
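
To give a feel for how small that core is, here is a minimal C sketch of the
one data structure the primitives revolve around (no tags, no GC; purely
illustrative):

    #include <stdlib.h>

    /* A cons cell: a pair of pointers.  Lists, trees, and (with atoms
       added) whole programs are built from nothing but these. */
    typedef struct Cell {
        struct Cell *car;   /* first of the pair */
        struct Cell *cdr;   /* rest of the pair  */
    } Cell;

    Cell *cons(Cell *car_, Cell *cdr_)
    {
        Cell *c = malloc(sizeof *c);
        if (c) { c->car = car_; c->cdr = cdr_; }
        return c;
    }

    Cell *car(Cell *c) { return c->car; }
    Cell *cdr(Cell *c) { return c->cdr; }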

The way LispMs worked, AIUI, is that the machine language wasn't Lisp,
it was something far simpler, but designed to map onto Lisp concepts.

I have been told that modern CPU design & optimisations & so on map
really poorly onto this set of primitives. That LispM CPUs were stack
machines, but modern processors are register machines. I am not
competent to judge the truth of this.

If Yarvin's claims are to be believed, he has done 2 intertwined things:

[1] Experimentally or theoretically worked out something akin to these
primitives.
[2] Found or worked out a way to map them onto modern CPUs.

This is his "machine code". Something that is not directly connected
or associated with modern CPUs' machine languages. He has built
something OTHER but defined his own odd language to describe it &
implement it. He has DELIBERATELY made it unlike anything else so you
don't bring across preconceptions & mental impurities. You need to
start over.

The basic layer is both foundation & insulation. It's technological
insulation, a barrier between the byte 

Re: Plan9 and Inferno (was Re: strangest systems I've sent email from)

2016-05-03 Thread Liam Proven
On 29 April 2016 at 16:51, Brian L. Stuart  wrote:
> On Thu, 4/28/16, Liam Proven  wrote:
 The efforts to fix and improve Unix -- Plan 9, Inferno -- forgotten.
>>
>> It is, true, but it's a sideline now. And the steps made by Inferno
>> seem to have had even less impact. I'd like to see the 2 merged back
>> into 1.
>
> Actually, it's best not to think of Inferno as a successor to Plan 9, but
> as an offshoot.

Understood.

I *think* I understand the motivations for wanting Plan 9 over
Inferno, for example retaining fondness for native CPU compilation
over VMs -- but TBH, given the relatively small influence of either
platform on the wider world, and the close relationship between them,
I don't see there being sufficient differentiation to keep both alive.

But I do not understand the OSes, the communities and so on well
enough; mine is an outsider's perspective.

>   The real story has more to do with Lucent internal
> dynamics than to do with attempting to develop a better research
> platform.  Plan 9 has always been a good platform for research, and
> the fact that it's the most pleasant development environment I've
> ever used is a nice plus.  However, Inferno was created to be a
> platform for products.

Well, yes, but Java won that war, ISTM. And now that Java is losing
that niche too, it's time to strike out for new ground, IMHO.

> The Inferno kernel was basically forked from
> the 2nd Edition Plan9 kernel, and naturally there are some places
> that differ from the current 4th Edition Plan 9 kernel.  However, a
> number of the differences have been resolved over the years, and
> the same guy does most of the maintenance of the compiler suite that's
> used for native Inferno builds and for Plan 9.  Although you usually
> can't just drop driver code from one kernel into the other, the differences
> are not so great as to make the port difficult.  So both still exist and
> both still get some development as people who care decide to make
> changes, but they've never really been in a position to merge.
>
> And BTW, if you like the objectives of the Limbo language in Inferno,
> you'll find a lot of the ideas and lessons learned from it in Go.  After
> all, Rob Pike and Ken Thompson were two of the main people behind
> Go and, of course, they had been at the labs, primarily working on
> Plan 9, before moving to Google.

I am sure you're right but as a non-programmer myself, I'm not very
interested in new languages for the traditional Unix stack. It's the
OS stuff that interests me personally.



-- 
Liam Proven • Profile: http://lproven.livejournal.com/profile
Email: lpro...@cix.co.uk • GMail/G+/Twitter/Flickr/Facebook: lproven
MSN: lpro...@hotmail.com • Skype/AIM/Yahoo/LinkedIn: liamproven
Cell/Mobiles: +44 7939-087884 (UK) • +420 702 829 053 (ČR)


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-05-02 Thread Toby Thain

On 2016-04-30 5:20 PM, Chuck Guzis wrote:

On 04/30/2016 02:07 PM, Mouse wrote:


Reading this really gives me the impression that it's time to fork
C. There seems to me to be a need for two different languages, which
I might slightly inaccurately call the one C used to be and the one
it has become (and is becoming).



I vividly recall back in the 80s trying to take what we learned about
aggressive optimization of Fortran (or FORTRAN, take your pick) and
apply it to C.   One of the tougher nuts was the issue of pointers.
While pointers are typed in C (including void), it's very difficult for
an automatic optimizer to figure out exactly what's being pointed to,
particularly when a pointer is passed as an argument or re-used.


Indeed. John Regehr writes more on this in this recent post:
http://blog.regehr.org/archives/1307

--Toby




C++, to be sure, is much better in this respect.

--Chuck





Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-05-01 Thread Chuck Guzis
On 04/30/2016 04:31 PM, Sean Conner wrote:

> I believe that's what the C99 keyword "restrict" is meant to address.

Closing the barn door after the horses have run off.  It's not in C++
and *must* be included by the programmer.  I suspect if you take 100 C99
programs, 99 of them will not include "restrict".

Because of legacy C code, "restrict" could not be assumed by default.

Sigh.

--Chuck


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-05-01 Thread Guy Sotomayor Jr

> On Apr 30, 2016, at 6:39 AM, Diane Bruce  wrote:
> 
> On Fri, Apr 29, 2016 at 03:55:35PM -0700, Chuck Guzis wrote:
>> Those who claim that there's not much difference between C and assembly
>> language have never run into a true CISC machine--or perhaps they rely
>> only on libraries someone else has written.
> 
> Now wait a minute here. C is a very old language. When it was first written
> as a recursive descent compiler, compiler technology was very primitive.
> K&R-style code with primitive compilers pretty much resulted in high-level
> assembler. Look at the keyword 'register' and the rationale given for it.
> 
>> 
>> Writing a true global optimizing compiler that generates code as good as
>> assembly is a nearly impossible task.  When you are dealing with a
>> target machine with a large CISC set, it's really tough.
> 
> Now on that we furiously agree. One problem has been getting that through
> to people who insist that C is still a high level assembler and has
> not changed from the time when it was a hand crafted recursive descent
> LR to the modern compiler with all the lovely optimisations we can do.
> 

What does a recursive descent parser have to do with code generation and
optimization?

All of the compilers I wrote (admittedly it’s been decades since I wrote one
from scratch) had recursive descent parsers.  The output from that was a
“program tree” that the global optimizer walked to do transforms that were
the “global” optimizations.  The code generator took that output and generated
code for a synthetic machine.  That was then gone over by the target code
generator to do register allocation and code generation to the final ISA.  The
result of this flow was pretty respectably optimized code.  The parser
was *very* removed from the code generation.  Frankly, I fail to see what
effect the implementation choice for the parser has on the quality (or lack
thereof) of the generated code.

> The best way of viewing it is to acknowledge that modern C is effectively
> a new language compared to old K&R C that had no prototypes. 

ANSI C is the “modern C” vs K&R C.  However, even ANSI C evolves
every few years to what is effectively a new variant of C that maintains
some backwards compatibility with older versions.

TTFN - Guy

Re: FidoNet ....show [was: History [was Re: strangest systems I've sent email from]]

2016-04-30 Thread Brent Hilpert
On 2016-Apr-30, at 9:05 PM, Rod Smallwood wrote:

> That's interesting.  I had a FidoBBS back in 1983, No. 33.
> 
> Written by a guy called Tom Jennings in C.

Tom was on this list, up until somewhere ~ mid-2000s AIR.

http://sensitiveresearch.com/
http://worldpowersystems.com/

. . . from his resume:  1984 – 1995: Fido Software (d/b/a), author Fido/Fidonet software



Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Sean Conner
It was thus said that the Great Chuck Guzis once stated:
> On 04/30/2016 02:07 PM, Mouse wrote:
> 
> > Reading this really gives me the impression that it's time to fork
> > C. There seems to me to be a need for two different languages, which
> > I might slightly inaccurately call the one C used to be and the one
> > it has become (and is becoming).
> 
> 
> I vividly recall back in the 80s trying to take what we learned about
> aggressive optimization of Fortran (or FORTRAN, take your pick) and
> apply it to C.   One of the tougher nuts was the issue of pointers.
> While pointers are typed in C (including void), it's very difficult for
> an automatic optimizer to figure out exactly what's being pointed to,
> particularly when a pointer is passed as an argument or re-used.

  I believe that's what the C99 keyword "restrict" is meant to address.  
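
  A minimal sketch of what "restrict" buys the optimizer (the function is
invented for illustration):

    #include <stddef.h>

    /* The restrict qualifiers promise the compiler that dst and src
       never overlap, so a store through dst cannot change what src
       points at.  Without that promise the compiler must assume
       aliasing and give up on reordering or vectorizing the loop. */
    void scale(float *restrict dst, const float *restrict src,
               float k, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = k * src[i];
    }

If the arrays do overlap, the behavior is undefined -- the keyword shifts
the burden of proof from the compiler to the programmer.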

  -spc



Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Diane Bruce
On Sat, Apr 30, 2016 at 05:07:08PM -0400, Mouse wrote:
> > In support of this, I’d encourage everyone who works with C to read Chris
> > Lattner’s “What Every C Programmer Should Know About Undefined Behavior”
> > series from the LLVM blog:
> 
> > http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
> > http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
> > http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_21.html
> 
> Reading this really gives me the impression that it's time to fork C.
> There seems to me to be a need for two different languages, which I
> might slightly inaccurately call the one C used to be and the one it
> has become (and is becoming).

It has been effectively forked for a long time.
Same name. Look at Fortran II vs. more modern Fortran.
The pain of counting characters for the H command... ugh.

Diane
-- 
- d...@freebsd.org d...@db.net http://www.db.net/~db


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Chris Hanson

On Apr 30, 2016, at 2:07 PM, Mouse  wrote:
> 
> Reading this really gives me the impression that it's time to fork C.
> There seems to me to be a need for two different languages, which I
> might slightly inaccurately call the one C used to be and the one it
> has become (and is becoming).
> 
> The first is the "high-level assembly" language, the one that's
> suitable for things like embedded programming in what C99 calls a
> freestanding environment, kernel and low-level library implementations,
> and the like, where you want to do whatever's reasonable from the point
> of view of someone who knows the target machine architecture, even if
> it's formally undefined by the language.
> 
> The second is more the language the author of those posts is talking
> about, where the compiler is allowed to do surprising things for the
> sake of performance.

The author of those posts is the creator and lead developer of LLVM, clang, and 
Swift, and I think he would argue that the second case is also best served by 
the compiler in the first case, because you absolutely want the best 
performance and code size possible.

I'll note too that people are working on progressively more back ends for LLVM 
& clang for embedded uses.

  -- Chris




Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Chuck Guzis
On 04/30/2016 02:07 PM, Mouse wrote:

> Reading this really gives me the impression that it's time to fork
> C. There seems to me to be a need for two different languages, which
> I might slightly inaccurately call the one C used to be and the one
> it has become (and is becoming).


I vividly recall back in the 80s trying to take what we learned about
aggressive optimization of Fortran (or FORTRAN, take your pick) and
apply it to C.   One of the tougher nuts was the issue of pointers.
While pointers are typed in C (including void), it's very difficult for
an automatic optimizer to figure out exactly what's being pointed to,
particularly when a pointer is passed as an argument or re-used.

C++, to be sure, is much better in this respect.

--Chuck


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Mouse
> In support of this, I’d encourage everyone who works with C to read Chris
> Lattner’s “What Every C Programmer Should Know About Undefined Behavior”
> series from the LLVM blog:

> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_21.html

Reading this really gives me the impression that it's time to fork C.
There seems to me to be a need for two different languages, which I
might slightly inaccurately call the one C used to be and the one it
has become (and is becoming).

The first is the "high-level assembly" language, the one that's
suitable for things like embedded programming in what C99 calls a
freestanding environment, kernel and low-level library implementations,
and the like, where you want to do whatever's reasonable from the point
of view of someone who knows the target machine architecture, even if
it's formally undefined by the language.

The second is more the language the author of those posts is talking
about, where the compiler is allowed to do surprising things for the
sake of performance.

The zero_array example slightly bothers me, because the optimization
into a memset is valid only when floating-point zero is all-bits-zero;
while this is something the compiler can know, and is true on "all"
machines, the way it's written doesn't call it out as a
machine-dependent optimization, quite possibly leading people to write
the memset themselves in such cases, producing a different bug.
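
The example, reconstructed as a sketch (the code in Lattner's post differs
slightly):

    #include <string.h>
    #define N 1024
    float array[N];

    /* what the programmer writes */
    void zero_array(void)
    {
        for (int i = 0; i < N; i++)
            array[i] = 0.0f;
    }

    /* what the optimizer may turn it into -- valid only because, on
       the targets in question, float 0.0f is all-zero bits; nothing
       in the source spells that assumption out */
    void zero_array_as_compiled(void)
    {
        memset(array, 0, sizeof array);
    }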

/~\ The ASCII Mouse
\ / Ribbon Campaign
 X  Against HTML   mo...@rodents-montreal.org
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Diane Bruce
On Sat, Apr 30, 2016 at 12:28:35PM -0700, Chris Hanson wrote:
> On Apr 30, 2016, at 11:43 AM, Diane Bruce  wrote:
> > 
> > We cannot use today the same outdated ideas about 'C'
> > that we used 40 years ago. Compilers have improved.
> > Know your tools. And that's all I have said. 
> 
> In support of this, I’d encourage everyone who works with C to read Chris 
> Lattner’s “What Every C Programmer Should Know About Undefined Behavior” 
> series from the LLVM blog:
> 
> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
> http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_21.html

Yes, it's an excellent series. 

> 
> C has fairly well-defined semantics; they just aren’t necessarily what you 
> think they are, and optimizers are taking advantage of them (under the “as 
> if” rule) such that a developer’s idea of what assembly a specific section of 
> C code should generate is not all that accurate these days.
> 

Indeed.

>   -- Chris
> 
> 

Diane
-- 
- d...@freebsd.org d...@db.net http://www.db.net/~db


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Chris Hanson
On Apr 30, 2016, at 11:43 AM, Diane Bruce  wrote:
> 
> We cannot use today the same outdated ideas about 'C'
> that we used 40 years ago. Compilers have improved.
> Know your tools. And that's all I have said. 

In support of this, I’d encourage everyone who works with C to read Chris 
Lattner’s “What Every C Programmer Should Know About Undefined Behavior” series 
from the LLVM blog:

http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html
http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html
http://blog.llvm.org/2011/05/what-every-c-programmer-should-know_21.html

C has fairly well-defined semantics; they just aren’t necessarily what you 
think they are, and optimizers are taking advantage of them (under the “as if” 
rule) such that a developer’s idea of what assembly a specific section of C 
code should generate is not all that accurate these days.
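
One standard illustration of the “as if” rule at work (a textbook case, not
taken verbatim from Lattner’s posts):

    #include <limits.h>

    /* Signed overflow is undefined, so the compiler may assume
       x + 1 > x always holds and fold this function to "return 1;",
       silently deleting the overflow test the programmer meant. */
    int will_not_overflow(int x)
    {
        return x + 1 > x;
    }

    /* the well-defined way to ask the same question */
    int will_not_overflow_checked(int x)
    {
        return x < INT_MAX;
    }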

  -- Chris



Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Diane Bruce
On Sat, Apr 30, 2016 at 11:53:40AM -0600, Eric Smith wrote:
> On Sat, Apr 30, 2016 at 7:39 AM, Diane Bruce  wrote:
> > Now on that we furiously agree. One problem has been getting that through
> > to people who insist that C is still a high level assembler and has
> > not changed from the time when it was a hand crafted recursive descent
> > LR to the modern compiler with all the lovely optimisations we can do.
> 
> Sure, the compilers are way better than they were back then. That
> doesn't make the language itself any higher level. The semantic level

To be clear, I never said it did make it any higher level.

> of the language and the level of abstraction it provides have barely
> changed, so the language itself is still a mostly portable substitute

We have, for all but specialized applications, taken the procrustean approach,
and our processors have mostly started to fit the language.
I remember being told a story, from when I was working for
CDC Canada, of a Unix port that was done for one of the Cybers: they
essentially emulated a PDP-11, then ported Unix on top of that. *shudder*

Oddly, C is not a good fit for the Transputer.

> for assembly language. As Chuck points out, it isn't a good substitute
> for any particular CISC assembly language, but then those aren't
> portable at all.

Also to be clear, I also agreed with that statement.

Are we talking at cross purposes or what? My point is simple.
We cannot use today the same outdated ideas about 'C'
that we used 40 years ago. Compilers have improved.
Know your tools. And that's all I have said. 

> 

Diane
-- 
- d...@freebsd.org d...@db.net http://www.db.net/~db


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Eric Smith
On Sat, Apr 30, 2016 at 7:39 AM, Diane Bruce  wrote:
> Now on that we furiously agree. One problem has been getting that through
> to people who insist that C is still a high level assembler and has
> not changed from the time when it was a hand crafted recursive descent
> LR to the modern compiler with all the lovely optimisations we can do.

Sure, the compilers are way better than they were back then. That
doesn't make the language itself any higher level. The semantic level
of the language and the level of abstraction it provides have barely
changed, so the language itself is still a mostly portable substitute
for assembly language. As Chuck points out, it isn't a good substitute
for any particular CISC assembly language, but then those aren't
portable at all.


Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Peter Cetinski

> On Apr 30, 2016, at 9:39 AM, Diane Bruce  wrote:
> 
> Now wait a minute here. C is a very old language. When it was first written
> as a recursive descent compiler, compiler technology was very primitive.
> K&R-style code with primitive compilers pretty much resulted in high-level
> assembler. Look at the keyword 'register' and the rationale given for it.
> 
> Now on that we furiously agree. One problem has been getting that through
> to people who insist that C is still a high level assembler and has
> not changed from the time when it was a hand crafted recursive descent
> LR to the modern compiler with all the lovely optimisations we can do.
> 
> Diane
> -- 
> - d...@freebsd.org d...@db.net http://www.db.net/~db

A good example from my area of vintage computing enthusiasm is the Misosys C 
Compiler for the TRS-80.  It will compile pure K&R C into intermediate assembly 
language files that you would then feed into the Misosys MRAS assembly language 
development system just as if you were developing the assembly from scratch.  
Of course, the entire process is deathly slow on original hardware, and the 
level of optimization is nowhere near what you could achieve with hand-written 
assembly, both of which are critical concerns on 2 MHz Z80 systems for any 
non-trivial application.

https://archive.org/details/MC_C-Language_Compiler_1985_sys_PDF






Re: Programming language failings [was Re: strangest systems I've sent email from]

2016-04-30 Thread Diane Bruce
On Fri, Apr 29, 2016 at 03:55:35PM -0700, Chuck Guzis wrote:
> Those who claim that there's not much difference between C and assembly
> language have never run into a true CISC machine--or perhaps they rely
> only on libraries someone else has written.

Now wait a minute here. C is a very old language. When it was first written
as a recursive descent compiler, compiler technology was very primitive.
K&R-style code with primitive compilers pretty much resulted in high-level
assembler. Look at the keyword 'register' and the rationale given for it.

> 
> Writing a true global optimizing compiler that generates code as good as
> assembly is a nearly impossible task.  When you are dealing with a
> target machine with a large CISC set, it's really tough.

Now on that we furiously agree. One problem has been getting that through
to people who insist that C is still a high level assembler and has
not changed from the time when it was a hand crafted recursive descent
LR to the modern compiler with all the lovely optimisations we can do.

The best way of viewing it is to acknowledge that modern C is effectively
a new language compared to old K&R C that had no prototypes. 
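
The break is visible in half a dozen lines -- the same function in both
dialects (a sketch):

    /* old K&R C: no prototype, parameter types declared separately,
       arguments unchecked at every call site */
    int add();

    int add(a, b)
    int a;
    int b;
    {
        return a + b;
    }

    /* modern C: the prototype lets the compiler check and convert
       arguments at each call */
    int add_ansi(int a, int b)
    {
        return a + b;
    }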


> 
> Speaking from experience, again.

ditto.

> 
> 
> --Chuck
> 
> 

Diane
-- 
- d...@freebsd.org d...@db.net http://www.db.net/~db

